Triptolide Inhibits the AR Signaling Pathway to Suppress the Proliferation of Enzalutamide Resistant Prostate Cancer Cells

Enzalutamide is a second-generation androgen receptor (AR) antagonist for the treatment of metastatic castration-resistant prostate cancer (mCRPC). Unfortunately, AR dysfunction means that resistance to enzalutamide will eventually develop. Thus, novel agents are urgently needed to treat this devastating disease. Triptolide (TPL), a key active compound extracted from the Chinese herb Thunder God Vine (Tripterygium wilfordii Hook F.), possesses anti-cancer activity in human prostate cancer cells. However, the effects of TPL against CRPC cells and the underlying mechanism of any such effect are unknown. In this study, we found that TPL at low dose inhibits the transactivation activity of both full-length and truncated AR without changing their protein levels. Interestingly, TPL inhibits phosphorylation of AR and its CRPC-associated variant AR-V7 at Ser515 through XPB/CDK7. As a result, TPL suppresses the binding of AR to promoter regions in AR target genes along with reduced TFIIH and RNA Pol II recruitment. Moreover, TPL at low dose reduces the viability of prostate cancer cells expressing AR or AR-Vs. Low-dose TPL also shows a synergistic effect with enzalutamide to inhibit CRPC cell survival in vitro, and enhances the anti-cancer effect of enzalutamide on CRPC xenografts with minimal side effects. Taken together, our data demonstrate that TPL targets the transactivation activity of both full-length and truncated ARs. Our results also suggest that TPL is a potential drug for CRPC, and can be used in combination with enzalutamide to treat CRPC.

Introduction

Prostate cancer (PCa) is among the most common adult malignancies and is the second leading cause of cancer death in men. In 2016, there were an estimated 180,890 new diagnoses and 26,120 deaths among American men. [1] In past decades, surgical or medical androgen deprivation therapy (ADT) has been the primary treatment paradigm for PCa patients. [2] However, almost all ADT treatments eventually fail due to the development of metastatic castration-resistant prostate cancer (mCRPC). Recently, 'second-generation' drugs, such as enzalutamide (also named MDV3100), abiraterone acetate, and cabazitaxel, have been administered to patients who develop mCRPC. [3][4][5][6][7] Enzalutamide is an AR antagonist which has an 8-fold higher affinity for AR than bicalutamide, and inhibits the ability of AR to translocate into the nucleus and bind DNA. [8] Although these new drugs have shown improved efficacies in the clinic, nearly all treated mCRPC patients eventually develop resistance to these agents, possibly due to amplification or gain-of-function somatic mutation of the AR gene, aberrant posttranslational modification of the AR protein, alternative splicing events that result in hyperactive receptors, and cofactor dysregulation and/or intracrine androgen synthesis. [8] Compared to hormone-naïve cancers, most CRPCs contain AR splice variants (AR-Vs), which are often up-regulated. Such selection for AR-Vs in CRPC has also been demonstrated in several preclinical models. [9,10] These AR-Vs lack the ligand binding domain (LBD), and thus are insensitive to drugs such as enzalutamide that target the LBD. AR-Vs remain constitutively active as transcription factors in a ligand-independent manner.
[11,12] Two major AR-Vs, AR-V7 (also called AR3, which contains exons 1-3 and CE3, a small expressed tag after exon 3) and AR-V567es (also known as AR-V12, which contains exons 1-4 and exon 8), are capable of regulating gene expression in the absence of the full-length AR protein (AR-FL). [13] Patients with high AR-V7 or detectable AR-V567es expression levels have significantly shorter cancer-specific survival than other CRPC patients. [14] In CRPC xenografts, resistance to both abiraterone and enzalutamide is associated with increased expression of AR truncated variants. [15,16] Furthermore, it has been reported recently that in CRPC cells expressing both endogenous AR-FL and AR-Vs, AR-Vs drive resistance to enzalutamide by functioning as independent drivers of the AR transcriptional program. [14,17,18] Thus, combining enzalutamide with an AR-V-targeting agent may be a viable approach to overcome resistance to enzalutamide.

Based on the important roles of AR dysfunction, including overexpression, mutation and the presence of AR truncated variants, in the development of mCRPC and resistance to clinical drugs, the search for novel compounds that target AR signaling has become a hotspot in PCa research. Natural compounds provide an excellent resource for finding novel antiandrogens. Several natural products were recently reported to show an anti-PCa effect by targeting AR or AR-Vs. [19][20][21] Maytenus royleanus extract had potent growth inhibition and apoptosis induction effects on PCa in vitro and in vivo. [19] Urolithins from walnut polyphenol metabolites suppressed the proliferation of PCa cells by repressing AR expression. [20] The marine compound Rhizochalinin (Rhiz) re-sensitized AR-V7-positive PCa cells to enzalutamide and had a pronounced anti-cancer effect on enzalutamide- and abiraterone-resistant AR-V7-positive cells, indicating that Rhiz is a potential drug for treating PCa patients with AR-V7 expression and enzalutamide or abiraterone resistance. [21] Sintokamide A (SINT 1) inhibited the growth of enzalutamide-resistant PCa cells with AR-Vs and induced regression of CRPC xenografts by binding to the activation function-1 (AF-1) region in the N-terminal domain of AR and suppressing the transactivation of both AR and AR-Vs. [22]

We have focused our attention on Triptolide (TPL), a major active compound extracted from the Chinese medicine "Thunder God Vine" (Tripterygium wilfordii Hook F.), which possesses potent anti-cancer, anti-fertility, anti-inflammatory and immunosuppressive properties. [23] Our previous study showed that TPL has potent anti-PCa effects. [24] However, the severe toxicity of TPL and our poor understanding of the mechanism of its anti-cancer effect on CRPC have limited its clinical use. In this study, we found that TPL inhibits the transactivation activity of both AR and AR-Vs, and reduces phosphorylation of AR at Ser515 through XPB/CDK7. A low dose of TPL inhibits the growth of CRPC cells and has a synergistic effect with enzalutamide in vitro. TPL also enhances the anti-cancer effect of enzalutamide on CRPC xenografts. In all, our data indicate that the combination of TPL with enzalutamide is a potential therapeutic treatment for CRPC.

Cell culture and reagents

LNCaP, 22Rv1, PC3, DU145, and 293T cells were purchased from the Cell Bank of Type Culture Collection of the Chinese Academy of Sciences (Shanghai, China) and maintained in RPMI-1640 supplemented with 10% FBS and 100 units/ml of penicillin and streptomycin.
All the cell lines were recently authenticated by short tandem repeat analysis at the Cell Bank of Type Culture Collection of the Chinese Academy of Sciences and the Genetic Testing Biotechnology Company (Suzhou, China, April 2016). C4-2 cells were provided and authenticated by Dr S. Li. C4-2/Luci and C4-2/AR-V7 cells were generated by infecting C4-2 cells with either control virus pLVX-Luci or virus expressing AR-V7 (pLVX-AR-V7). C4-2 shCTL and C4-2 shXPB were generated by infecting C4-2 cells with either a scrambled shRNA or pLKO.1 encoding shXPB. Infected cells were maintained in RPMI 1640 medium containing 1 mg/ml puromycin. To generate enzalutamide-resistant C4-2 (C4-2R) cells, C4-2 cells were chronically exposed to increasing concentrations of enzalutamide (5-20 μM) by passage for more than 6 months and maintained with 20 μM enzalutamide. C4-2 parental cells were passaged as a control. TPL (> 98% purity) was purchased from π-π Technologies, Inc (Guangzhou, China), and dissolved in DMSO at a stock concentration of 100 mM. Enzalutamide (MDV3100, Cat #: S1250) was from Selleck Chemicals (Houston, TX, USA). BS-181 (Cat #: HY-13266) was from MedChem Express (USA). Plasmids are detailed in the Supplementary Materials and Methods.

Transcriptional reporter assay

LNCaP cells (1×10⁵ cells/well) were transfected with 300 ng of pGL3-PSA-Luc reporter plasmid or the control plasmid along with 300 ng AR-FL or AR-V7. Transactivation of the AR NTD (AR 1-558) was measured by co-transfecting LNCaP cells with 5×UAS-TATA-luciferase and AR-(1-558)-Gal4DBD or Gal4DBD for 24 h prior to treatment with TPL (6.25 nM) for 1 h. Cells were then incubated with forskolin (Sigma; 50 µM) or IL-6 (Peprotech, Rocky Hill, NJ; 50 ng/ml) or vehicle for an additional 24 h. Transfected cells were incubated for 24 h in the absence and presence of 1 nM R1881 with or without inhibitors. Luciferase activity was determined using a dual luciferase reporter assay system (Promega). Luciferase activities were normalized to the protein concentrations of the samples.

RNA isolation, reverse transcription, and quantitative real-time PCR

The procedures are described in the Supplementary Materials and Methods, and the qPCR primers are listed in Table S1.

Chromatin immunoprecipitation (ChIP) assay

Cells were plated in 150 mm dishes (7×10⁶ cells) in RPMI 1640 supplemented with 8% CSS (charcoal-stripped FBS) for 72 h. Cells were pretreated with TPL or DMSO as vehicle for 1 h before treatment with 1 nM R1881 or vehicle (ethanol) for 6 h. ChIP assays were performed using a ChIP kit (Millipore) according to the manufacturer's instructions. Antibodies against AR (C-19, sc-815, Santa Cruz), CDK7 (sc-856, Santa Cruz), XPB (sc-293, Santa Cruz), FLAG (Sigma) or RNA pol II S5 (ab5408, Abcam) were used. IgG antibody was used as negative control. The bound DNA was amplified by qPCR using the primers listed in Table S1.

Viability assay

The MTT assay or SRB assay was used to quantify the cell viability. The procedures are described in the Supplementary Materials and Methods.

Clonogenic assay

PCa cells were seeded in triplicate at a density of 1000 cells/well into 12-well plates. After two days, cells were treated with different concentrations of TPL for 14 days. Colonies were fixed with methanol and stained with 0.5% crystal violet (Sigma). Colonies with 50 cells or greater were counted in each well.
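The MTT and SRB readings described above are typically summarized as dose-response curves from which an IC50 is estimated. As an illustrative aside (not the authors' actual analysis script), the following minimal Python sketch fits a four-parameter logistic curve to hypothetical viability data; all dose and response values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical TPL doses (nM) and viability normalized to vehicle control
doses = np.array([0.78, 1.56, 3.125, 6.25, 12.5, 25.0, 50.0])
viability = np.array([0.97, 0.90, 0.75, 0.52, 0.31, 0.18, 0.10])

# Initial guesses: bottom, top, IC50, Hill slope
p0 = [0.0, 1.0, 5.0, 1.0]
params, _ = curve_fit(four_pl, doses, viability, p0=p0)
bottom, top, ic50, hill = params
print(f"Estimated IC50 = {ic50:.2f} nM (Hill slope {hill:.2f})")
```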
Calculation of Combination Index by the Chou-Talalay Method

The type of interaction between Triptolide and enzalutamide was evaluated by comparing the cytotoxic effects obtained after simultaneous exposure to the drugs. The combination index (CI) was calculated using Calcusyn 2.0 software based on the Chou-Talalay method [25,26]: CI < 1 indicates a synergistic effect, CI = 1 indicates an additive effect, and CI > 1 indicates an antagonistic effect (a computational sketch of this calculation is given below).

Animal study

The animal study (Project number: YJ32) was approved by the Institutional Animal Care and Use Committee of the Model Animal Research Center at Nanjing University and strictly followed ethical regulatory standards. Six-week-old male nude mice (STOCK-Foxn1 nu/Nju) were inoculated subcutaneously with 5×10⁶ 22Rv1 cells suspended in 50% Matrigel in both dorsal flanks. The day following inoculation, mice were randomly divided into four groups (n = 10) and treated every other day for a total of 3 weeks as follows: (1) vehicle control; (2) TPL (75 μg/kg); (3) enzalutamide (25 mg/kg); (4) TPL (75 μg/kg) plus enzalutamide (25 mg/kg).

Statistical analysis

Statistical analysis was performed using GraphPad Prism (version 6.01; GraphPad Software). Except where specified, comparisons between groups were performed with the 2-tailed Student's t test, and differences were considered statistically significant at p < 0.05.

TPL inhibits AR transcription activity

Persistent AR activity through mutation of the AR gene in CRPC cells accounts for the failure of enzalutamide therapy. Hence, we assessed whether TPL attenuates the ligand-dependent and ligand-independent transactivation activities of AR. We found that the expression of luciferase, driven by the PSA promoter, was inhibited in the presence of enzalutamide (10 μM) or TPL (6.25 nM), suggesting that a low dose of TPL can inhibit R1881-induced AR transactivation activity without affecting the levels of endogenous AR (Figure 1A and 1B). Next, we co-transfected LNCaP cells with a reporter vector containing the Gal4 binding site and an expression vector encoding a chimeric protein consisting of the N-terminal domain (NTD) of human AR (AR-NTD; amino acids 1-558) fused to the Gal4-DNA binding domain (Gal4DBD), which can be activated by forskolin (FSK) or IL-6 in the absence of androgen and serum to stimulate PKA and STAT3 signaling, respectively. The results showed that TPL reduces both FSK- and IL6-induced transactivation activity of AR-NTD (Figure 1B and 1C) without affecting the levels of the fusion protein AR-NTD-Gal4DBD (Figure 1E and 1F). TPL treatment also significantly inhibited endogenous AR activation, induced by FSK or IL6, without changing the AR protein level (Figure S1A and B). These data indicate that TPL possesses the ability to inhibit both ligand-dependent and ligand-independent transactivation of AR, and the effect is not due to a reduction in the AR protein level. Next, to directly examine whether TPL can inhibit the transactivation activity of the AR variants AR-V567es and AR-V7, we co-transfected plasmids expressing AR-FL and AR variants (pcDNA3-AR-FL/-AR-V567es/-AR-V7) and a reporter plasmid (pGL3-PSA-Luc) into 293T cells, which do not express endogenous AR. The results showed that TPL inhibits the transactivation activity of AR-FL and the two AR variants without affecting their protein levels (Figure S1C and D). Similar results were also observed in AR-negative PC3 cells (Figure S1E and F). Therefore, we demonstrated that TPL can inhibit the ligand-dependent and ligand-independent transactivation activity of AR-FL and AR variants.
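As noted in the Methods above, the combination index was computed with Calcusyn 2.0. The sketch referenced there, shown below, illustrates the underlying Chou-Talalay median-effect calculation with invented single-agent dose-response data; it is a simplified reimplementation for illustration, not the Calcusyn algorithm itself. Each drug's median-effect parameters (m, Dm) are fitted from log(fa/fu) = m·log(D) − m·log(Dm), and the CI for a combination (d1, d2) achieving effect fa is d1/Dx1 + d2/Dx2.

```python
import numpy as np

def median_effect_fit(doses, fa):
    """Fit Chou-Talalay median-effect parameters: slope m and median-effect dose Dm.
    fa = fraction affected, fu = 1 - fa; log(fa/fu) = m*log(D) - m*log(Dm)."""
    fu = 1.0 - fa
    m, intercept = np.polyfit(np.log10(doses), np.log10(fa / fu), 1)
    dm = 10 ** (-intercept / m)
    return m, dm

def dose_for_effect(m, dm, fa):
    """Dose of a single drug required to reach fraction affected fa."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

# Hypothetical single-agent dose-response data (fraction of cells affected)
tpl_doses = np.array([1.5, 3.0, 6.0, 12.0, 24.0])    # nM
tpl_fa    = np.array([0.15, 0.30, 0.48, 0.66, 0.82])
enz_doses = np.array([5.0, 10.0, 20.0, 40.0, 80.0])  # uM
enz_fa    = np.array([0.10, 0.22, 0.38, 0.55, 0.71])

m1, dm1 = median_effect_fit(tpl_doses, tpl_fa)
m2, dm2 = median_effect_fit(enz_doses, enz_fa)

# A hypothetical combination (6 nM TPL + 20 uM enzalutamide) producing fa = 0.70
d1, d2, fa_combo = 6.0, 20.0, 0.70
ci = d1 / dose_for_effect(m1, dm1, fa_combo) + d2 / dose_for_effect(m2, dm2, fa_combo)
print(f"CI = {ci:.2f} ({'synergy' if ci < 1 else 'additive/antagonism'})")
```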
To further examine the effects of TPL on AR transactivation activity, we examined the mRNA levels of endogenous target genes of AR-FL (PSA, FKBP5, TMPRSS2) and AR-V (UBE2C and AKT1). In LNCaP cells, which express only AR-FL, TPL (6.25 nM) effectively repressed R1881-induced transcription of AR target genes (Figure 1G). TPL also suppressed the expression of AR-FL and AR-V target genes in 22Rv1 cells in a dose-dependent manner, without changing the AR mRNA level (Figure 1H). The results were validated by western blotting for AR and PSA protein levels (Figure 1I and 1J). Taken together, these results show that TPL inhibits the transactivation activity of AR-FL and AR variants at a low concentration (6.25 nM) without affecting their protein level.

TPL inhibits the transactivation activity of AR through CDK7 and XPB

Since we observed that the activity of AR is suppressed by TPL in PCa cells with no effects on its expression, we hypothesized that TPL may affect AR at the post-translational level. Interestingly, we found that 6.25 nM TPL decreased the level of AR phosphorylated at Ser515 (pAR S515) in LNCaP cells and the CRPC cell line C4-2/AR-V7 (Figure 2A), which stably expresses the AR-V7 variant (Figure S2A) and is more resistant to enzalutamide than control C4-2/Luci cells (Figure S2B). Given that CDK7 in the TFIIH complex is responsible for phosphorylating AR at Ser515, [27] we examined CDK7 as well as XPB, another TFIIH subunit, and RPB1, the largest subunit of RNA polymerase II. We found that 6.25 nM TPL did not change their expression levels (Figure 2A). To further examine whether CDK7 is involved in the effect of TPL on pAR S515, we treated PCa cells expressing AR-FL or the AR-V7 variant with the selective CDK7 inhibitor BS-181. We found that BS-181 has a similar effect to TPL on the level of pAR S515 (Figure 2B), which indicates that TPL may decrease the level of pAR S515 by indirectly affecting the activity of CDK7. Furthermore, to determine whether TPL inhibits AR transcriptional activity by reducing the level of pAR S515, we co-transfected 293T cells with the reporter plasmid PSA-Luc and plasmids expressing AR with different phosphorylation statuses, either AR/WT (wild-type AR) or AR/S515E (a phosphomimetic mutant). Transfected cells were pretreated with TPL prior to incubation with R1881 for 24 h. As shown in Figure 2C, TPL treatment significantly inhibits AR/WT transactivation activity, whereas AR/S515E abrogates the effect of TPL. This indicated that the phosphorylation of AR at Ser515 is essential for the suppression of AR activity by TPL. Next, to examine the effects of TPL on the DNA binding activity of AR, we performed ChIP assays and found that TPL significantly suppressed R1881-mediated AR binding to the androgen response element (ARE) and enhancer of the PSA gene. The binding of XPB and CDK7 to the promoter of the PSA gene was also decreased (Figure 2D). Consequently, reduced recruitment of RNA pol II (pS5) was observed, consistent with TPL disrupting the binding of AR and TFIIH. Similar experiments were also performed in C4-2/AR-V7 cells, which stably express AR-V7. TPL significantly inhibited the recruitment of these proteins to the PSA promoter, independent of the presence of R1881 (Figure 2E). Together, these data indicate that TPL also suppresses AR-mediated transcriptional activation by inhibiting AR binding and RNA pol II recruitment to target gene promoters.
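The ChIP enrichment shown in Figure 2D and 2E was quantified by qPCR (see Methods). ChIP-qPCR data of this kind are commonly expressed with the percent-input method; the following generic sketch, with hypothetical Ct values and a hypothetical 1% input fraction, illustrates that calculation and is not taken from the authors' analysis.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent-input for ChIP-qPCR.
    ct_input is first adjusted to 100% input by subtracting log2(1/input_fraction)."""
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical Ct values at the PSA promoter (1% input saved before IP)
print(percent_input(ct_ip=27.5, ct_input=24.0))  # e.g. AR ChIP, vehicle
print(percent_input(ct_ip=30.1, ct_input=24.0))  # e.g. AR ChIP, TPL-treated (less enrichment)
```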
TPL was found to directly bind to the XPB protein within the TFIIH complex, [28] which also contains CDK7 and XPD. Since XPD mutation was reported to suppress CDK7-mediated phosphorylation of AR at Ser515, [27] we hypothesized that TPL may function through XPB to suppress CDK7-mediated phosphorylation of AR at Ser515. Depletion of XPB by RNA interference reduced the level of pAR S515 while the CDK7 protein level was unaffected (Figure 3A). Accordingly, knockdown of XPB suppressed the transactivation activity of AR/WT, and had no effect on AR/S515E (Figure 3B). These results indicate that TPL suppresses CDK7-mediated phosphorylation of AR at Ser515 through XPB. Moreover, knockdown of XPB suppressed the occupancy of AR on the PSA promoter, and the recruitment of RNA pol II (pS5) to the PSA transcription start site (Figure 3C). These results demonstrate that XPB is the TPL-sensitive mediator in the AR signaling pathway.

Low concentrations of TPL inhibit PCa cell growth

We and others have shown that TPL has effective anti-cancer activity against PCa at high concentrations (micromolar). [24,29,30] Based on the aforementioned results on AR transactivation with TPL in the nanomolar range, we were interested to examine whether low TPL concentrations (which are likely to be less toxic in vivo) still possess anti-PCa effects, particularly on CRPC cells. As shown in Figure 4A, low concentrations of TPL significantly reduced the R1881-stimulated growth of LNCaP cells over a period of 4 days, with an IC50 of 6.25 nM. Meanwhile, low concentrations of TPL significantly suppressed the colony-forming ability of C4-2 and 22Rv1 cells in a dose-dependent manner (Figure 4B). These results confirm that low concentrations of TPL also inhibit the growth of PCa cells. We examined whether low TPL concentrations also show anti-PCa effects on the C4-2/AR-V7 cell line, which stably expresses AR-V7. The results showed that TPL suppresses the viability and colony-forming ability of both C4-2/Luci and C4-2/AR-V7 cells in a dose-dependent manner (Figure 4C and 4D). Moreover, TPL significantly reduced the R1881-induced expression of PSA in C4-2/AR-V7 cells (Figure 4E). These results indicate that low concentrations of TPL have similar cytotoxic effects on PCa cells expressing AR variants. In addition, we also examined the ability of low concentrations of TPL to induce apoptosis, as this is considered to be the major mechanism underlying the anti-cancer activity of TPL. The FACS data showed that TPL at a dose of 6.25 nM has a moderate apoptosis induction effect on LNCaP cells and a weak effect on C4-2/Luci and C4-2/AR-V7 cells, while higher concentrations of TPL have a marked apoptosis induction effect on all three cell lines (Figure S3). These results indicated that at low concentrations, such as 6.25 nM, TPL may exert its anti-PCa activity mainly through cell growth inhibition, while higher concentrations of TPL act through both cell growth inhibition and apoptosis induction. Collectively, our data reveal that low concentrations of TPL also have effective anti-cancer activity on PCa cells expressing AR-FL or AR variants.

Co-treatment with TPL and enzalutamide suppresses cell growth and induces apoptosis in CRPC cells

To further examine the effects of TPL on CRPC cells, we generated an enzalutamide-resistant PCa cell line, C4-2R, by continuous culture of C4-2 cells in medium containing enzalutamide for 7 months.
The C4-2R cells showed more resistance to enzalutamide than the parental C4-2 cells (Figure S4A), and expressed higher protein levels of nuclear AR-V7 (Figure S4B-E). Interestingly, we found that the viability of C4-2R cells was significantly reduced by TPL in a dose-dependent manner (Figure 5A). This indicates that TPL can overcome the enzalutamide resistance of PCa cells expressing AR-V, and further suggests that TPL may be combined with enzalutamide against CRPC. To examine the combined effects of the two agents, we treated 22Rv1 cells with TPL (6.25 nM) in the presence or absence of enzalutamide for 4 days. The results showed that co-treatment with TPL and enzalutamide has a stronger inhibitory effect on the viability of 22Rv1 cells than TPL or enzalutamide alone (Figure 5B). Similar results were also observed in the enzalutamide-resistant cell line C4-2R (Figure 5C). Co-treatment with TPL and enzalutamide had a stronger inhibitory effect on the colony-forming ability of 22Rv1 and C4-2R cells (Figure 5D). Co-treatment also induced higher levels of the cleaved products of two markers of apoptosis, Caspase-3 and PARP, in 22Rv1 and C4-2R cells (Figure 5E), indicating that the combined treatment is also more cytotoxic to CRPC cells. These data suggest that low concentrations of TPL may have a synergistic effect with enzalutamide against PCa cells in vitro. We then examined the viability of 22Rv1 and C4-2R cells that were exposed to individual or combined TPL and enzalutamide at various concentrations, and calculated the combination index (CI) between TPL and enzalutamide using the Chou-Talalay method. As shown in Figure 5F, the results demonstrated that the combined TPL and enzalutamide treatment at low concentrations (<60 nM for TPL and <200 μM for enzalutamide) had a synergistic effect on the growth inhibition of both 22Rv1 and C4-2R cells. Taken together, our data demonstrate that low concentrations of TPL have a synergistic anti-PCa effect with enzalutamide in vitro. Co-treatment with TPL and enzalutamide may therefore be a potential therapeutic option for CRPC.

Co-treatment with TPL and enzalutamide inhibits CRPC tumor progression in vivo

To evaluate the anti-PCa effect of co-treatment with TPL and enzalutamide in vivo, we established a CRPC xenograft model in mice using 22Rv1 cells. Male nude mice bearing 22Rv1 xenografts were randomized and treated with either TPL (75 μg/kg), enzalutamide (25 mg/kg), or both for 3 weeks. At the end of 3 weeks, the mice were euthanized and xenografts were collected for further analysis. As shown in Figure 6, co-treatment with TPL and enzalutamide is highly effective at reducing the volumes of xenograft tumors, compared with the moderate effect of TPL and the slight effect of enzalutamide (Figure 6A). Accordingly, the xenograft tumors in the co-treatment group appeared smaller than those from the other groups (Figure 6B). The weight of xenograft tumors from the co-treatment group was also lower than that of the other groups (Figure 6C). Moreover, the PSA level in mice in the co-treatment group was the lowest among the four treatment groups (Figure 6D). Histologic analysis showed that the cells in the xenograft tumors from the co-treatment group were packed together less tightly than those in the other groups (Figure 6E).
IHC analysis revealed that the xenograft tumors from the co-treatment group showed less Ki-67 and AR staining, and more cleaved-Caspase-3 staining, than the other groups (Figure 6E-G), which indicated that the xenograft tumors undergo less proliferation and more apoptosis under co-treatment with TPL and enzalutamide. These results demonstrate that TPL and enzalutamide together show more effective anti-CRPC activity than either drug alone, which indicates that TPL enhances the anti-cancer effect of enzalutamide on CRPC in vivo. We also examined the body weights of the mice during treatment and found that none of the three treatments had an obvious effect on body weight (Figure S5A). No obvious changes were detected in the weight and tissue structure of essential organs, such as lung, heart, liver, spleen, and kidney (Figure S5B and C). These data also demonstrated that a low dose of TPL, either alone or in combination with enzalutamide, has no significant toxicity to mice. Taken together, our results indicate that co-treatment with TPL and enzalutamide may be a potential treatment for CRPC.

Discussion

As a representative of 'second generation' drugs and as an AR antagonist, enzalutamide has shown favorable effects in mCRPC patients in phase I/II/III trials. However, patients who respond to enzalutamide usually relapse within 1 to 2 years, mainly due to the dysfunction of AR or AR variants. [31] Therefore, there is an urgent need for new agents that can overcome enzalutamide resistance. Our study revealed that TPL at low concentrations reduced the level of pAR S515 via XPB/CDK7, and reduced the binding of AR and recruitment of phosphorylated RNA Pol II (S5) to the promoters of AR target genes, resulting in the inhibition of AR transactivation activity. Low concentrations of TPL also inhibited the growth of PCa cells that express AR-FL or functional AR variants. Furthermore, TPL and enzalutamide together showed synergistic and efficient anti-CRPC effects in vitro and in vivo, including inhibition of CRPC cell proliferation and survival, without noticeable side-effects. Our data indicate that the combination of TPL with enzalutamide is a potential therapeutic treatment for CRPC. The role of AR variants in the mechanism underlying enzalutamide resistance is supported by clinical data. [31] The most common AR-V isoforms associated with CRPC and subsequent metastases are AR-V7 and AR-V567es. In addition to regulating AR-FL downstream targets, AR-Vs also regulate a unique set of genes, most of which are associated with mitotic and anti-apoptotic functions, e.g. UBE2C. [32] Overexpression of UBE2C has been correlated with the presence of AR-V7 in clinical samples. [18,33] Hence, drugs such as EPI-001 and its analogs, which disrupt the transactivation of AR and particularly AR-Vs, are more effective at preventing the growth of CRPCs. [34] Some compounds, such as 20(S)-protopanaxadiol-aglycone, Galeterone and its analog VNPT55, exert their anti-CRPC effect by depleting the AR-FL and AR-V protein levels or inducing AR-FL and AR-V degradation. [35][36][37][38] In our study, we found that TPL at low concentration effectively suppresses the transactivation activity of both AR-FL and AR-Vs, without changing their expression levels.
However, such inhibition is mediated by XPB/CDK7 in the TFIIH complex, not by direct binding of TPL to AR, as we did not detect direct interaction between TPL and AR-FL or AR-NTD (AR-AF1) using Biomolecular Interaction Analysis (BIA, data not shown). Herein, we found that TPL inhibits the transactivation activity of AR-FL and AR-Vs by reducing the level of pAR S515. Phosphorylation plays an important role in modulating the functional activity of AR. More than 15 distinct phosphorylation sites have been found within AR, most of which localize in the AR-NTD. [39] The phosphorylation of AR has been implicated in various AR activities, including cellular localization, expression, DNA binding, transcriptional activity and stability/degradation. [40] Phosphorylation of AR at Ser515 has been shown to be a key step for accurate transcriptional activation by AR, including the cyclic recruitment of the transcription machinery and turnover of AR itself. [27] Unlike phosphorylation at other sites, phosphorylation of AR at Ser515 is not required for translocation of AR into the nucleus, but specifically affects the ability of AR to bind to its responsive elements. Impaired phosphorylation of AR at Ser515 disrupts AR-mediated transcription. [27] Phosphorylation of AR at Ser515 was also reported to be involved in the epidermal growth factor (EGF)-induced increase in AR transcriptional activity. [41] These studies demonstrated that pAR S515 is important for AR transactivation. In this study, we found that TPL effectively decreases the level of pAR S515, suggesting that TPL suppresses AR transactivation activity by inhibiting the phosphorylation of AR. In addition, the level of pAR S515 was related to EGF-induced PCa cell growth. [41] pAR S515 was also reported to be a potential predictive marker for relapse in PCa. [42] These studies indicated that pAR S515 is related to PCa progression. Hence, the decreased level of pAR S515 may also contribute to the anti-CRPC activity of TPL. The phosphorylation of AR at Ser515 was shown to be mediated by the CDK7 kinase in the TFIIH complex, which is one of the general transcription factors that participates in transcription initiation. [27] TFIIH is composed of two subcomplexes: the core complex, which consists of 7 subunits (XPD, XPB, p62, p52, p44, p34 and TTDA), and the cyclin-activating kinase subcomplex (CDK7, MAT1, and cyclin H). [43] Among these subunits, the helicases XPD and XPB open the promoter DNA while CDK7 phosphorylates RNA polymerase II (RNAPII) and transcription factors to initiate transcription. [43] AR interacts strongly with XPB and XPD, and weakly with CDK7, which eventually enhances AR transactivation by phosphorylating AR at S515. [27,43] However, we found that TPL attenuates the level of pAR S515 without impact on CDK7 expression. TPL was reported to activate CDK7, which led to phosphorylation of Rpb1, the largest subunit of RNAPII, at S1878, followed by degradation of Rpb1. [44,45] We found that inhibition of CDK7 with the selective inhibitor BS-181 had a similar effect to TPL on the phosphorylation of AR at Ser515 in PCa cells. This confirms that CDK7 is involved in TPL-induced attenuation of pAR S515. The next question is how TPL decreases the phosphorylation of AR at Ser515 through CDK7. Loss-of-function mutation of XPD has been shown to reduce the level of pAR S515 through CDK7, [27] which suggests that XPD may play a bridging role in CDK7-mediated AR phosphorylation.
XPD and XPB are both helicase subunits of TFIIH, so it is possible that XPB plays a similar role to XPD in CDK7-mediated phosphorylation of AR, since XPB also binds directly with CDK7, like XPD. [27] TPL was reported to directly target XPB, leading to inhibition of XPB ATPase activity. [28] We found that knockdown of XPB by RNA interference had a similar effect to TPL, including a decreased level of pAR S515, inhibition of AR transactivation activity and reduced RNA pol II recruitment. These results demonstrated that TPL attenuates AR activity through inhibition of the XPB/CDK7-mediated phosphorylation of AR at Ser515. The second-generation drug enzalutamide is now widely used in the clinic to treat PCa, with potent treatment effects. However, the development of enzalutamide resistance, which is mainly induced by AR dysfunction, is a newly encountered problem. Therapeutic strategies have been investigated in which enzalutamide is used in combination with other drugs. [46] Several compounds were shown to enhance the anti-PCa effect of enzalutamide in vitro or in vivo, such as the RSK inhibitor SL0101, [47] the 5α-reductase inhibitor dutasteride, [48] and AZD5363, [49] which targets the PI3K/Akt pathway. Our data demonstrated that TPL is a potential drug for CRPC therapy. TPL showed effective anti-CRPC activity in PCa cells expressing either AR-FL or AR-Vs by abrogating the XPB/CDK7-mediated phosphorylation of AR at Ser515, which inhibited the transactivation activity of both AR-FL and AR-Vs. Furthermore, TPL showed a synergistic anti-PCa effect with enzalutamide in vitro. TPL also enhanced the anti-PCa effect of enzalutamide in vivo. These data demonstrated that the combined TPL/enzalutamide treatment strategy has potential for CRPC therapy. TPL has been regarded as a promising anti-cancer drug, based on its potent and broad-spectrum anti-cancer effects in vitro and in vivo. However, several barriers prevent the clinical application of TPL. Firstly, TPL is poorly water-soluble, which limits its clinical use. Several water-soluble derivatives of TPL have been developed, such as PG490-88 [50] and Minnelide, [51] which can convert to TPL in vivo and retain most of the bioactivity of TPL. PG490-88 and Minnelide showed potent anti-cancer effects and were approved for entry into Phase I clinical trials for prostate cancer [52] and pancreatic cancer, [51] respectively. Secondly, TPL is highly cytotoxic. In traditional Chinese medicine, the "Thunder God Vine", from which TPL is extracted, is thought to be very toxic and has been used as a pesticide, although it is mainly used to treat autoimmune and inflammatory diseases. Furthermore, the anti-cancer effect of TPL is attributed to its cytotoxicity. We found that TPL is cytotoxic to both cancer cells and normal cells in vitro, although the cytotoxic effect of TPL on normal cells is less than on cancer cells. [53] The toxicity of TPL is dose-dependent. The margin between the safe dose and the toxic dose of TPL is very small, [54] which leads to a narrow therapeutic window. Therefore, development of TPL derivatives with moderate toxicity is an effective solution. The cytotoxicity of the novel TPL derivative LLDT-8, which was developed to treat rheumatoid arthritis, is 122-fold lower than TPL in cells, [55] yet it still has effective anti-cancer activity. In this study, we found that TPL, even at a concentration as low as 6.25 nM, shows effective anti-CRPC activity and has a synergistic effect with enzalutamide.
The low dose of TPL did not cause any obvious adverse effects in the xenograft mouse model. Using a low dose of TPL in clinical applications is therefore another approach to reducing its toxicity. Thirdly, TPL has severe side-effects in animal models and patients, including gastrointestinal disturbances, kidney dysfunction, leucopenia, aplastic anemia and infertility, which limits its clinical application. [23] The adverse reactions of TPL are mainly due to its diverse biological activities. Besides anti-cancer activity, TPL also has anti-inflammatory, immunosuppressive, anti-fertility and anti-cystogenic properties. The relationship between the anti-cancer activity and immunosuppressive activity of TPL is very interesting and needs to be elucidated. Recently, glutriptolide, a new glucose-conjugated TPL derivative which selectively targets cancer cells overexpressing the glucose transporter, was developed. It has greater water solubility than TPL and higher cytotoxicity towards cancer cells than normal cells. [56] Glutriptolide provides a novel example of TPL modification and application. Fourthly, the molecular mechanism underlying the anti-cancer effect of TPL is still unclear. Several TPL-binding proteins have been identified, including XPB, polycystin-2 (PC-2), a disintegrin and metalloprotease 10 (ADAM10), dCTP pyrophosphatase 1 (DCTPP1) and TAB1. [28,[57][58][59][60] Of these, XPB is thought to be the primary target of TPL, which is consistent with the phenotype of TPL-induced transcription inhibition. However, some genes are up-regulated upon TPL treatment, [61] and overexpression of XPB does not totally neutralize the anti-cancer effect of TPL. [24] TPL-induced Rpb1 degradation was shown to be more important than XPB for the anti-cancer activity of TPL, [45] and glutriptolide has no inhibitory effect on the activity of XPB in vitro. [56] We have found several new potential TPL-binding proteins using protein chips (data not shown). The anti-cancer mechanism of TPL should therefore be extensively explored, as this may be helpful for developing potent new TPL derivatives or clinical applications with high therapeutic effect and low side-effects. In summary, although many difficult problems must be solved before TPL can be used in clinical applications, TPL shows potent anti-cancer effects and is a promising anti-cancer drug.

Conclusion

In summary, our study shows that TPL significantly inhibits the transactivation activity of AR-FL and AR-Vs by disrupting the phosphorylation of AR at Ser515 through XPB/CDK7. Moreover, TPL shows effective anti-PCa activities even at a low dose. TPL also synergizes with enzalutamide in attenuating PCa cell survival in vitro, and enhances the anti-PCa effect of enzalutamide on CRPC xenograft growth in vivo. These results provide a strong rationale for further evaluation of this combination in the clinic.
iStable: off-the-shelf predictor integration for predicting protein stability changes

Background

Mutation of a single amino acid residue can cause changes in a protein, which could then lead to a loss of protein function. Predicting protein stability changes can suggest candidate mutations for novel protein design. Although many prediction tools are available, conflicting prediction results from different tools can confuse users.

Results

We propose an integrated predictor, iStable, with a grid computing architecture, constructed using sequence information and prediction results from different element predictors. In the learning model, several machine learning methods were evaluated, and a support vector machine was adopted as the integrator, rather than simply choosing the majority answer given by the element predictors. Furthermore, the role played by sequence information in our model was analyzed, and a window size of 11 was determined. iStable is available with two different input types: structural and sequential. After training and cross-validation, iStable has better performance than all of the element predictors on several datasets. Under different classifications and conditions for validation, this study has also shown better overall performance for different types of secondary structures, relative solvent accessibility circumstances, protein memberships in different superfamilies, and experimental conditions.

Conclusions

The trained and validated version of iStable provides an accurate approach for prediction of protein stability changes. iStable is freely available online at: http://predictor.nchu.edu.tw/iStable.

Background

Protein structure is highly related to protein function. A single amino acid mutation may cause a severe change in the whole protein structure and thus lead to disruption of function. A well-known instance is sickle cell anemia, which is caused by a single mutation from glutamate to valine at the sixth position of the hemoglobin sequence, leading to abnormal polymerization of hemoglobin and distorting the shape of red blood cells [1]. A single amino acid mutation can also change the structural stability of a protein by altering the free energy change (ΔG, or dG) of folding; the difference in folding free energy change between the wild-type and mutant protein (ΔΔG, or ddG) is often considered a key measure of protein stability changes [2]. From the viewpoint of protein design, it would be very helpful if researchers could accurately predict changes in protein stability resulting from amino acid mutations without actually doing experiments [3]. If the mechanism by which a single-site mutation influences protein stability could be revealed, protein designers might be able to design novel proteins or modify existing enzymes into more efficient, thermal-stable forms, which are ideal for biochemical research and industrial applications in two ways: first, a thermal-stable enzyme could function well in a high-temperature environment and therefore reveal higher efficiency due to the relatively higher temperature; second, a structurally stable protein could have a longer half-life than relatively unstable ones, meaning a longer usage time, which could economize the use of enzymes. As data regarding protein stability changes based on residue mutations has been collected, a comprehensive and integrated database of protein thermodynamic parameters has been built and published.
ProTherm was constructed and can be queried through a web-based interface at http://gibk26.bio.kyutech.ac.jp/jouhou/protherm/protherm.html. All the data collected in ProTherm are validated through actual experiments and collected from published original articles. In this database, researchers access information on the mutant protein, experimental methods and conditions, thermodynamic parameters, and literature information. Due to the richness of data, ProTherm has been a valuable resource for researchers trying to learn more about the protein folding mechanism and protein stability changes [4]. In past decades, many prediction methods have been designed for predicting protein stability changes. Some of these are based on physical potentials [5][6][7], some on statistical potentials [6,[8][9][10][11][12][13], and some on empirical approaches that combine physical and statistical potentials to infer how protein stability would change upon mutation [14][15][16][17][18]; still others are based on machine learning theories, converting the energy and environment parameters into digital inputs for different methods such as support vector machines, neural networks, decision trees and random forests [19][20][21][22][23][24][25][26]. Nowadays, there are many web-based prediction tools available, and each of them has its own capabilities and advantages, although none of them is perfect. As different predictors give conflicting results, it may be difficult for the user to decide which result is correct. An integrated predictor could relieve the user from such a dilemma [27]. In this study, we construct an integrated predictor, iStable, which uses a support vector machine (SVM) to predict protein stability changes upon single amino acid residue mutations. Integration of predictors helps to combine results from different predictors and use the power of meta-prediction to perform better than any single method alone. Considering the effects of nonlocal interactions, most prediction methods need three-dimensional information on the protein in order to predict stability changes; however, recent research has shown that sequence information can also be used to effectively predict a mutation's effects [9,[19][20][21][22][24][25][26]28,29]. We collected the prediction results from the different types of predictors used for constructing iStable by submitting a compiled dataset to them, and applied the sequence information together as inputs for SVM training. When the user submits a new prediction task, iStable will determine whether the mutation is stabilizing or destabilizing. As previous works have mentioned, correctly predicting the direction of the stability change is more relevant than knowing its magnitude [19,22]. In the construction of iStable, five web-based prediction tools were chosen as element predictors: I-Mutant2.0 [20], MUPRO [22], AUTO-MUTE [30], PoPMuSiC2.0 [31], and CUPSAT [10]. From these predictors, seven models were chosen for in-model training, as described later. During iStable training, we found that the element predictors usually performed well when handling destabilizing mutations, but when it came to stabilizing mutations, the element predictors did not show very satisfying performance, leading to a high specificity combined with a relatively low sensitivity. After training, we designed two different prediction strategies for users that provide two formats of input data.
Both showed better prediction performance than all of the element predictors, which was especially apparent when predicting the effects of stabilizing mutations. Moreover, we undertook various analyses to evaluate iStable in order to make it more precise for user applications. The constructed iStable web-based tool, which provides two strategies for prediction, is available at http://predictor.nchu.edu.tw/iStable/.

Compilation of training datasets

The compilation of our training dataset can be divided into six steps, which are summarized in Figure 1.

Step 1: Collection of training data

Two datasets, collected from ProTherm, were used for our model training. The first is Capriotti's training set used for the construction of I-Mutant2.0 (available at http://gpcr2.biocomp.unibo.it/~emidio/I-Mutant2.0/dbMut3D.html), which includes data from 1948 mutation sites of 58 proteins, and is referred to as dataset S1948 for convenience. The second source is the dataset Dehouck used in training of PoPMuSiC2.0 (available at http://bioinformatics.oxfordjournals.org/cgi/content/full/btp445/DC1), which includes data from 2648 mutation sites of 119 proteins; this dataset is named S2648 for convenience. Five types of information can be obtained from these two datasets: 1) The ID of the protein, corresponding to its Protein Data Bank (PDB) ID, which allows element predictors to obtain 3D information for proteins by fetching the structure data (in PDB file format). 2) The site of mutation and the residue types of the native and mutant proteins. 3) The temperature used in the experiment. 4) The pH used in the experiment. 5) The relative stability change of mutant proteins (ddG or ΔΔG), an index of stability change that has been used in previous studies.

Step 2: Deletion of redundant data

In dataset S1948, many of the mutations share the same PDB IDs and have the same mutation site and ddG value, resulting in redundant data that may lead to biases in training. In addition to these redundant sites, some data still share the same PDB ID and mutation site, with only the pH and temperature differing slightly. We removed the redundant data and named the resulting dataset M1311, as there remained data from 1,311 mutations of 58 proteins. The S2648 dataset shares the same PDB ID and mutation site information as M1311 for 815 mutations; we had to remove this data because we needed an unbiased training dataset. After having removed the redundant data, the remaining dataset was named M1820 and contained data from 1,820 mutations in 119 proteins.

Step 3: Definitions of positive and negative data

We defined the stabilizing data as positive (+) with a ddG value > 0, and the destabilizing data as negative (-) with a ddG value < 0; this convention for ddG is consistent with I-Mutant2.0 and AUTO-MUTE. PoPMuSiC2.0 uses a different convention for ddG, so we inverted the sign of ddG in M1820.

Step 4: Correction of sequence information

To make our predictor more adaptable so that it can handle novel protein mutations, we also included sequence data in training datasets M1311 and M1820. The sequence information is presented as a segment of protein sequence centered on the mutated site, with window sizes ranging from 7 to 19 tested separately. Since the position of residues can be expressed as either absolute or relative, directly applying FASTA text will lead to inconsistencies with the training data, which could cause problems when using I-Mutant2.0 and MUPRO.
By manually checking the consistency between the sequence at the mutation site and the latest sequence text, we found several differences between relative and absolute positions of the first residue in protein sequences and corrected them to make the attached sequence information consistent with the training dataset; the final integrated dataset was called M3131. The datasets M1311, M1820, and M3131 can be fetched from Additional file 1.

Step 5: Classification of secondary structure and relative solvent accessibility

Previous studies have mentioned the secondary structure and relative solvent accessibility (RSA) of the mutation site as effective predictors of the accuracy of protein stability-change prediction [22,24]. We analyzed the distribution of data based on the secondary structure and RSA of the mutation site. Secondary structures were classified as helix (α helix), sheet (β sheet), or other (turn and coil). The RSA was classified by range: values between 0% and 20% were classified as "B" (buried), between 20% and 50% as "P" (partially buried), and between 50% and 100% as "E" (exposed). This RSA classification is based upon those used in previous studies [24,30].

Step 6: Categorization of proteins

The motivation for predicting protein stability changes is to find a mechanism to modify existing enzymes into more stable forms. We accessed the PDB to determine which superfamilies the proteins in the training dataset belonged to and found three major categories: enzymes, nucleic acid binding proteins, and protein-protein interaction related (ubiquitin-related, for example). The dataset can be fetched from Additional file 2.

Element predictors

Five element predictors were chosen:

1. I-Mutant2.0 adopts an SVM model to approximate the ddG value of the protein and predicts the direction of stability change. Both sequence (I-Mutant_SEQ) and structure (I-Mutant_PDB) information is used in iStable construction.

2. AUTO-MUTE computes the environmental disturbance caused by a single amino acid replacement. From the four models of prediction available in AUTO-MUTE, we chose the random forest (AUTO-MUTE_RF) and support vector machine (AUTO-MUTE_SVM) strategies for our model construction.

3. MUPRO adopts an SVM model to predict stability changes due to single-site mutations, primarily from sequential information, along with the use of optionally provided structural information. The result predicts only whether the change will lead to destabilization or not, without providing an actual ddG value. During the construction of iStable, we found that the regression task and the neural network approaches were broken. We used the SVM model (MUPRO_SVM) as an element predictor.

4. PoPMuSiC2.0 applies an energy-based function and uses the volume change of a protein upon single amino acid mutation to predict the stability change.

5. CUPSAT predicts protein stability changes using structural environment-specific atom potentials and torsion angle potentials. The user can submit predictions by typing in the PDB ID or uploading a custom PDB file.

Summaries of the element predictors are given in Table 1.

Obtaining prediction results from element predictors

When using I-Mutant2.0, in addition to the PDB ID, the sequential strategy (I-Mutant_SEQ) was also applied by choosing the direction-deciding prediction strategy; from the output form, we extracted the stability-change direction. When submitting to AUTO-MUTE, we entered the PDB ID, mutation, temperature, pH value, and chain code (if available).
The prediction results using RF and SVM were collected separately; we extracted the direction of stability change (decreased/increased) from the output form. Since MUPRO uses protein sequence as its input information, we obtained the sequence from a FASTA file downloaded beforehand, pasted the sequence into the input form, and designated the site of mutation and the mutated amino acid code. The output form gives the user three types of prediction results, and we took all of them into consideration. For some reason, the regression and neural network models on the website did not work when constructing iStable; the regression model always gave a result of "INCREASE", and the neural network predictor always gave "DECREASE" as a result. Presently, only the SVM strategy is applied in the construction of iStable. PoPMuSiC2.0 accepts PDB ID, chain code (if available), and site information as input data; the predicted ddG is then extracted. CUPSAT accepts either the PDB ID or the PDB file format in order to predict changes in stability, and we chose to use the uploaded PDB file. We obtained the secondary structure, the relative solvent accessibility of the mutated site, and the predicted ddG value. All the work described was completed with a Java program.

Encoding schemes of the support vector machine

After comparison with various algorithms, SVM was selected as the learning model for iStable; protein stability changes upon mutation can be predicted using structural and sequential information, as in previous studies. In our research, we used the prediction results from the element predictors as input data, with local sequence information included. The SVM converts the data into a multi-dimensional vector. After distributing the data into the multi-dimensional space, the SVM determines a hyperplane used to split the data into different groups. The trained integrated predictor iStable uses the SVM to predict the direction of stability change of the protein input data, that is, to determine whether the target is a stabilizing or a destabilizing mutant. In this work, we used LIBSVM (Library for Support Vector Machines) 2.89 [32] to implement the SVMs in this study, and the kernel adopted was the radial basis function (RBF). While training, two crucial parameters were tuned to optimize the performance of prediction: the kernel parameter γ and the penalty parameter C. The values of γ and C were tuned to 0.03125 and 2, respectively. When encoding our training data into the form used by the SVM, the input data was constructed using two schemes: a sequence scheme and a website results scheme. In the sequence scheme, we converted sequences into several sets of 21-symbol coded input, namely, the 20 amino acid codes and an extra input representing the end-flanking fragment (e.g., "-DCAMYW"); one set of the 21 inputs was used to represent the mutant residue after the mutation; the sequence scheme had (21 × (window size + 1)) inputs altogether. The website result scheme had seven sets of input (I-Mutant_PDB, I-Mutant_SEQ, AUTO-MUTE_RF, AUTO-MUTE_SVM, MUPRO_SVM, PoPMuSiC2.0 and CUPSAT) representing the prediction results of the element predictors, each shown as a set of three inputs, with destabilizing results represented as "1-0-0" and stabilizing results represented as "0-0-1". As some prediction queries were not accessible to a specific site, we recorded this type of result as a null prediction, represented as "0-1-0".
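To make the two encoding schemes concrete, the following minimal sketch assembles one training instance and trains an RBF-kernel SVM with the reported parameters (γ = 0.03125, C = 2). It uses scikit-learn rather than the LIBSVM 2.89 package the authors used, and the example window, predictor results, and toy training set are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"  # 20 amino acids + end-flanking symbol

def encode_sequence(window, mutant_residue):
    """Sequence scheme: one set of 21 binary inputs per window position,
    plus one extra set for the mutant residue -> 21 * (window_size + 1) inputs."""
    vec = []
    for aa in window + mutant_residue:
        one_hot = [0] * len(ALPHABET)
        one_hot[ALPHABET.index(aa)] = 1
        vec.extend(one_hot)
    return vec

def encode_website(results):
    """Website result scheme: 3 inputs per element predictor.
    'D' (destabilizing) -> 1-0-0, null -> 0-1-0, 'S' (stabilizing) -> 0-0-1."""
    coding = {"D": [1, 0, 0], None: [0, 1, 0], "S": [0, 0, 1]}
    vec = []
    for r in results:
        vec.extend(coding[r])
    return vec

# One invented instance: an 11-residue window centered on the mutation, mutant
# residue 'V', and results from the seven element predictors (I-Mutant_PDB,
# I-Mutant_SEQ, AUTO-MUTE_RF, AUTO-MUTE_SVM, MUPRO_SVM, PoPMuSiC2.0, CUPSAT).
x = encode_sequence("MKTAYEAGVLF", "V") + encode_website(["D", "D", "D", "S", None, "D", "D"])

# Toy training set (in practice: the M3131 instances); labels: 1 stabilizing, 0 destabilizing
X = np.array([x] * 4 + [encode_sequence("AAAAAAAAAAA", "G") + encode_website(["S"] * 7)] * 4)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

clf = SVC(kernel="rbf", gamma=0.03125, C=2.0).fit(X, y)
print(clf.predict([x]))  # expected: destabilizing (class 0)
```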
The trained predictor was evaluated with 5-fold cross-validation: the training dataset was split into five groups, with four groups used as training sets and one as the testing set by turns. After iStable was constructed using all of the schemes, we designed another model of predictor integration, named iStable_SEQ, primarily for users handling protein sequences where no PDB ID is available. The iStable_SEQ model was constructed using the sequence scheme and only the results of I-Mutant_SEQ and MUPRO_SVM from the website scheme, both of which use protein sequences as their inputs for prediction queries. iStable_SEQ was also trained and validated with 5-fold cross-validation.

Figure 2 is a brief introduction to iStable's grid computing architecture. The predictor can be divided into three different layers: the predictor layer, the coordinator layer, and the data visualization layer.

Figure 2: Grid computing architecture of iStable. When a user inputs the mutant protein's information through the graphical user interface, the input/output dispatcher passes the relevant information to the element predictors. After the results from the predictors are collected into the repository module, the prediction layer activates the prediction program and the output result is sent to the data visualization layer through the input/output dispatcher; finally, the integrated result is presented to the user.

Predictor layer

This layer is the source of data needed for data integration, which, in this article, refers to the element predictors.

Data visualization layer

This layer presents a graphical user interface (GUI) and outputs the prediction result; it can be divided into two modules: A. GUI: Through the use of a JSP website and JavaScript, it provides users with an interface for inputs and results in webpage form. B. Result visualization: A Java program responsible for integrating the prediction result and adding webpage tags for result output.

Coordinator layer

This layer is the coordinator between the predictor and data visualization layers. As users input parameters through the visualization layer GUI, the coordinator layer receives the parameters and sends them to the predictor layer at the same time. It can then receive results from the predictor layer to complete the prediction of stability change. The coordinator layer can be divided into three modules: A. Prediction: executes the prediction mechanism using the SVM method described before. B. Repository: deposits the prediction results from the element predictors. C. I/O Dispatcher: responsible for sequential actions after receiving parameters from users; collects results from element predictors, deposits data, and coordinates the prediction work.

Figure 3 ("Prediction progress of iStable") is a visualized presentation of the iStable prediction workflow. When a user inputs a query with protein mutant information, the program first accesses the PDB and gets the structure data and the amino acid sequence. After the structural and sequential information is gathered, the program extracts an 11-amino acid residue sequence window centered on the mutated site, converts it into 11 sets of sequential code with 21 inputs each, and converts the mutant residue into an extra set of sequential code. Meanwhile, the structural (PDB code and PDB file) and sequential (FASTA sequence) information is used to submit the prediction query to get prediction results from the seven element predictors, which are then converted into seven sets of 3-input website result schemes.
After both parts of the SVM input are converted, the support vector machine processes them and produces the prediction result output by iStable.

Performance assessment
Correct predictions of positive and negative data have different meanings, because the effects of mutation are not always detrimental to protein function. One purpose of predicting protein stability changes is to identify mechanisms of structural stability change upon single amino acid mutation; another is to apply this knowledge to protein design in order to make proteins more stable and thermotolerant. Since it is equally important to understand the mechanisms underlying stabilizing and destabilizing mutations, we expect an integrated predictor to make correct predictions in both cases. Since the minority result could be the right answer, we want to show that iStable, with SVM training, can tell right from wrong rather than simply picking the majority answer. Accuracy (Acc), sensitivity (Sn), specificity (Sp), and the Matthews correlation coefficient (MCC) were used to evaluate the predictive ability of each system. The four measures were defined as

Sn = TP / (TP + FN)
Sp = TN / (TN + FP)
Acc = (TP + TN) / (TP + FP + TN + FN)
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

where TP, FP, FN and TN are true positives, false positives, false negatives, and true negatives, respectively. Sn and Sp represent the rates of true positives and true negatives, respectively, and Acc is the overall accuracy of prediction. MCC is a measure of the quality of the classifications; its value ranges between -1 (an inverse prediction) and +1 (a perfect prediction), with 0 denoting a random prediction.

Results and discussion

Performance on the M1311, M1820 and M3131 datasets
After construction of the integrated predictor iStable, we first compared the performances of iStable and the element predictors; the results are shown in Tables 2 and 3. On both datasets, iStable showed obvious improvements in sensitivity, accuracy and MCC. The performance on dataset M1820 is worth mentioning: while the other predictors showed sensitivity values averaging below 0.370 and MCC values below 0.352, iStable reached a sensitivity of 0.456 and an MCC of 0.402. During our observations, we found that the element predictors made many more "negative" predictions than "positive" ones, leading to high specificity but universally low sensitivity. Given our objective, we wanted to construct a predictor that performs well on both positive and negative data. The MCC values show that iStable has the best overall performance on M1311. The results obtained from M1820 show that the performances of the element predictors are lower than on M1311, especially for I-Mutant2.0, AUTO-MUTE and MUPRO. This may be related to the training datasets used in their construction: the training data for MUPRO was extracted from Capriotti's training set S1615 for neural networks, and AUTO-MUTE's training data was extracted and edited from S1948, originally the same as that of I-Mutant2.0. As the M1311 dataset is similar to their training datasets, these three element predictors showed performances consistent with their training results, while the performances on M1820 indicate that they may perform relatively poorly on new data not seen during training. Consistent with the fact that the M1820 dataset was extracted from PoPMuSiC2.0's training data M2648, we observed that the performance of PoPMuSiC2.0 on M1820 was much better than on M1311.
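For reference, the four measures used throughout the comparisons that follow reduce to a few lines of code; the sketch below (function name ours) computes them from raw confusion-matrix counts.

```python
from math import sqrt

def evaluate(tp, fp, fn, tn):
    """Acc, Sn, Sp, and MCC from confusion-matrix counts; MCC ranges
    from -1 (inverse) through 0 (random) to +1 (perfect prediction)."""
    sn = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return acc, sn, sp, mcc

# Made-up counts illustrating the pattern noted above: a predictor biased
# toward "destabilizing" calls shows high Sp but low Sn.
print(evaluate(tp=30, fp=20, fn=70, tn=180))
```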
We tried different dataset sources, and iStable showed better prediction performance than every element predictor. When using the same training data, iStable still showed obvious improvements in performance, especially on stabilizing mutants. After comparing the performances of iStable and the element predictors on the two datasets, we wanted to show that training iStable with large amounts of data gives the integrated predictor a stronger capacity to deal with new data. We therefore checked the performances of all the predictors on the mixed dataset M3131, shown in Table 4. The specificity of iStable is sometimes lower than that of several element predictors; however, its overall performance is still better. Table 4 also shows that the integrated predictor iStable obviously improved the performance on positive data, with the highest sensitivity among all of the predictors. To validate iStable, we also compared it with other combination methods, i.e., radial basis function network (RBFN), random forest (RF), neural networks (NN), Bayesian network (BN), and majority voting (MV) [33], with respect to predicting protein stability changes on dataset M3131 (Table 5). The MCC values of iStable, RF, and NN are all over 0.6; those of BN and MV are both between 0.5 and 0.6; and that of RBFN is below 0.5. Neither the Sn nor the Sp of iStable is the highest among the combination methods; even so, iStable showed the best overall performance for integrating off-the-shelf predictors of protein stability changes. iStable was also trained and validated, using support vector regression, to predict the value of the free energy change by integrating the ddG values fetched from I-Mutant_PDB, AUTO-MUTE, PoPMuSiC, and CUPSAT. The correlation between the predicted and the observed ddG is 0.86, with a standard error of 1.5 kcal/mol, when the method is structure based (Figure 4). On the other hand, only I-Mutant_SEQ provides a predicted ddG value in the sequence-based setting; therefore, iStable_SEQ simply reports the ddG value generated by I-Mutant_SEQ.

Evaluation of sequence scheme
After comparing the performances of iStable and the element predictors on the integrated dataset M3131, and in order to validate the actual effect of the sequence scheme, we assessed the performance of the integrated predictor using different combinations: 1) sequence and website results (as above); 2) SVM using only the results from the element predictors; 3) SVM using sequence and website results, without AUTO-MUTE_RF, the predictor with the best performance among the element predictors but also the slowest to finish a prediction task; and 4) SVM using website results only, without AUTO-MUTE_RF. The purpose of the third and fourth strategies was to determine the power of the sequence scheme. Since AUTO-MUTE_RF is the only element predictor with an MCC value over 50%, we wanted to see whether, after dropping AUTO-MUTE_RF, the integrated predictor would keep an MCC value over 60% with the help of the sequence information, or whether its performance would drop significantly without it. The result is shown in Table 6. Combination 1, the same as above, performed better than combination 2, which uses only website results as SVM inputs, indicating that the addition of sequence information provides extra power when the element predictors alone are not accurate enough.
On the other hand, combination 3 performed much better than combination 4 when AUTO-MUTE_RF was not used; this reveals the power of the sequence scheme: while the remaining six element predictors alone could only achieve an MCC value of less than 0.5, with the sequence scheme the integrated predictor achieved an MCC value of 0.622, an obvious improvement.

Performance of the iStable_SEQ strategy with M3131
For users with novel proteins that lack structural information, iStable provides a prediction strategy that takes amino acid sequences as inputs. The prediction results are presented in Table 7. By integrating the results of the sequence-based prediction models of I-Mutant2.0 and MUPRO with the extra sequence scheme, the iStable_SEQ model showed a performance noticeably higher than that of the two models it integrates.

Structural analysis of predictors' performances
As mentioned, the secondary structure and RSA of the mutated site can influence a predictor's performance. Therefore, we analyzed the performance of iStable on mutations within different secondary structures and RSA ranges and compared the results with those of the element predictors. The results obtained for the different kinds of mutants are presented in Tables 8 and 9. With respect to secondary structure, iStable showed the best prediction performance among all the predictors. For some reason, the performance of iStable on mutants within secondary structures "other" than helices and sheets was relatively lower than within these two structures; this may be due to the irregular structures of loops and turns. Performance on β sheets showed a higher MCC than on helix and coil/turn structures, which is consistent with previous research [24]. This may be caused by the presence of residues in β-strand segments that are close in space but far apart in sequence [34]. When analyzing the performance of iStable across RSA ranges, we found that iStable performed best among the predictors in the buried (63.4%), partially buried (68.4%) and exposed (71.2%) regions. Among the three RSA ranges, iStable showed notably high performance in the partially buried region (68.4%), which is consistent with Dr. Gromiha's previous research [35]: the sequence and structure information of partially buried mutations is very important for predicting stability changes, whereas the correlation is not as strong for buried mutations. On the other hand, Dr. Gromiha indicated that buried mutations within β-strand segments correlate better than those in α-helical segments; consistent with this, iStable achieved higher sensitivity than the other element predictors for buried mutations.

The influence of window size on predictor performance
In previous research on constructing novel predictors, investigators have tried different lengths of protein sequence centered on the mutated site: MUPRO chose 7 as the best window size, while I-Mutant2.0 chose 19. We compared the performances of iStable with different window sizes using the sequence scheme. The result of the comparison is shown in Figure 5. As shown, a window size of 11 amino acids centered on the mutated site performed best in terms of both accuracy (85.7%) and MCC (66.9%). Based on this comparison, a window size of 11 was selected for the sequence scheme of iStable.

Performance with different protein superfamilies and experimental conditions
Protein structure is closely related to function, and alteration of protein structure as the result of mutation may lead to disruption of biological function.
We classified the proteins in our training dataset into their corresponding superfamilies, as previously mentioned. We chose three major categories of protein superfamilies (enzymes, DNA/RNA binding proteins, and protein-protein interaction-related proteins) to determine how well iStable would predict when the training dataset is limited. We used the three categories as independent training sets for iStable; each set was split into five subsets and used in 5-fold cross-validation. The performance results for the three categories of proteins are shown in Table 10. As shown, iStable performs better than any of the element predictors for all three categories. In the enzyme and protein-protein interaction categories, with limited data availability, iStable did not perform as well as the M3131-trained model, but in the nucleic acid binding protein category, iStable showed an obvious performance improvement that was clearly superior to the element predictors. Although the performance of iStable is limited by the prediction power of the element predictors, we still demonstrated that the combination of the sequence and website result schemes provides noticeable improvements in prediction performance. We also observed the performance of each predictor under a variety of pH and temperature ranges. Table 11 shows that iStable and AUTO-MUTE_RF perform better than the other element predictors when pH ≤ 6 or pH > 8. These two predictors have similar performance; however, iStable achieved higher accuracy than AUTO-MUTE_RF at temperatures ≤ 37 °C. Finally, it is worth mentioning that iStable is the best choice for predicting protein stability changes when the pH is between 6 and 8.

Conclusions

The power of the integrated predictor
Compared with various machine learning methods and element predictors, iStable successfully integrated the sequence and website result schemes to improve the prediction of protein stability changes. When taking a synergistic approach, several issues must be considered: 1) the input and output formats differ among the element predictors; 2) the prediction results of each element predictor must be evaluated; and 3) the overall performance of the synergistic system must be improved. Majority voting is a popular synergistic method, and it is the strategy frequently adopted by biologists when they must obtain an answer from many prediction tools. However, in our study both the element predictor AUTO-MUTE_RF and iStable performed much better than majority voting, with MCC values above 50%; this is because majority voting does not take into account any confidence measure on the prediction results from the different element predictors. Moreover, iStable is a prediction system based on a synergistic method and constructed on a grid computing architecture; it therefore offers software reusability and reduced computing resource requirements. On the other hand, while the sequence scheme provides local interaction information, the website result scheme also captures non-local interaction information through element predictors such as PoPMuSiC2.0 (folding free energy changes) and CUPSAT (atom potentials).
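The point about confidence measures is easy to see in code. The sketch below is our own illustration with placeholder data, not part of iStable: it contrasts a flat majority vote over seven binary predictor outputs with an SVM combiner trained on the same outputs, which can learn to weight a reliable predictor such as AUTO-MUTE_RF more heavily.

```python
import numpy as np
from sklearn.svm import SVC

# Rows: mutants; columns: the seven element predictors' calls
# (1 = stabilizing, 0 = destabilizing). Placeholder data for illustration.
rng = np.random.default_rng(1)
votes = rng.integers(0, 2, size=(200, 7))
labels = rng.integers(0, 2, size=200)

# Majority voting: every predictor counts equally, with no notion of
# which predictors are trustworthy.
mv_pred = (votes.sum(axis=1) >= 4).astype(int)

# A trained combiner instead learns weights from data, implicitly
# down-weighting unreliable predictors: the reliability structure
# that a flat vote discards.
stacker = SVC(kernel="rbf", gamma=0.03125, C=2.0).fit(votes, labels)
svm_pred = stacker.predict(votes)
```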
Because iStable_SEQ considers only sequence as input, it does not include non-local information; furthermore, only two element predictors can be adopted. As a result, the prediction performance of iStable_SEQ is lower than that of iStable by at least 10% in MCC.

Prediction tool available on website
The trained predictor iStable is available at http://predictor.nchu.edu.tw/iStable/. Users can access two prediction models: iStable and iStable_SEQ. For predicting mutations in proteins with 3-D structure information available in the PDB, users can input the PDB ID to apply the iStable model. If the user is interested in proteins that have an available sequence but whose structures are not in the PDB, the iStable_SEQ model is the ideal choice.
Chromatin and noncoding RNA-mediated mechanisms of gastric tumorigenesis

Gastric cancer (GC) is one of the most common and deadly cancers in the world. It is a multifactorial disease highly influenced by environmental factors, which include radiation, smoking, diet, and infectious pathogens. Accumulating evidence suggests that epigenetic regulators are frequently altered in GC, playing critical roles in gastric tumorigenesis. Epigenetic regulation involves DNA methylation, histone modification, and noncoding RNAs. While it is known that environmental factors cause widespread alterations in DNA methylation, promoting carcinogenesis, the chromatin- and noncoding RNA-mediated mechanisms of gastric tumorigenesis are still poorly understood. In this review, we focus on discussing recent discoveries addressing the roles of histone modifiers and noncoding RNAs and the mechanisms of their interactions in gastric tumorigenesis. A better understanding of epigenetic regulation would likely facilitate the development of novel therapeutic approaches targeting specific epigenetic regulators in GC.

INTRODUCTION
Gastric cancer (GC) is the fifth most common cancer and the fourth deadliest cancer in the world 1. GC shows distinct incidence patterns across geographical regions: most gastric cancer cases are found in Eastern Asia and Europe, while it is the most common cancer in some Middle Eastern countries. Although the overall incidence and mortality have decreased in most countries, likely due to better prevention and improved food preservation, increasing incidence rates among young adult populations have been observed in some countries, such as the UK and Sweden 2. Notably, the treatment of advanced or metastatic GC, with a median overall survival of less than one year, has been extremely challenging 3. Therefore, a better understanding of the disease mechanisms would likely help design more effective therapies. Consistent with its distinct geographical distribution patterns, GC is a highly multifactorial disease regulated by genetic, epigenetic, and environmental factors. Common genetic factors include mutations in oncogenes and tumor suppressor genes involved in cancer initiation and progression 4. The Cancer Genome Atlas (TCGA) project has identified common mutations and categorized GC into four major subtypes: Epstein-Barr virus (EBV)-positive, microsatellite instability (MSI), genomically stable (GS), and chromosomal instability (CIN) 5. Epigenetic factors are involved in the regulation of gene expression via mechanisms unrelated to the genetic sequence, which include DNA methylation, histone modification, and noncoding RNAs (ncRNAs) 6. Interestingly, analysis of tumor suppressors and oncogenes has shown that changes in DNA methylation are more common than genetic mutations in GC, highlighting the importance of epigenetic regulation in gastric tumorigenesis 7. Corroborating these data, another study showed that the impact of DNA methylation outweighs that of genetic mutations in GC when compared to esophageal cancer 8. Similarly, environmental factors, most notably diet and infection, play major roles in gastric carcinogenesis. In fact, GC is strongly associated with two infectious agents categorized as type I carcinogens, H. pylori and EBV 9-13. Environmental and epigenetic factors are interconnected, as both diet and infection have been found to influence DNA methylation 14,15.
While DNA methylation in GC has been extensively reviewed 6,16-18, other types of epigenetic regulation in GC are still poorly understood. As newly accumulating evidence suggests critical roles for histone modification and ncRNAs in GC tumorigenesis, we discuss recent studies demonstrating their roles and mechanistic interactions.

HISTONE MODIFICATION AND THE CHROMATIN LANDSCAPE IN GASTRIC CANCER
DNA is wrapped around octameric histone cores to form nucleosomes, which are organized into chromatin. The organization of chromatin into accessible/active euchromatin or condensed/repressed heterochromatin directly influences gene expression, and it is tightly regulated by covalent modifications of the histone tails via methylation, acetylation and phosphorylation 19. These modifications are reversible and are catalyzed by specialized "writer" and "eraser" proteins. For example, writers such as histone methyltransferases (HMTs) and histone acetyltransferases (HATs) deposit methyl and acetyl marks on histone tails, respectively. In contrast, erasers such as histone demethylases and histone deacetylases (HDACs) remove the respective marks 20. Other types of histone modifications, such as ubiquitination, sumoylation, and GlcNAcylation, are also possible, but they are much less studied. Writers and erasers also have specificity for the histone marks they regulate. For example, the catalytic subunit of polycomb repressive complex 2 (PRC2), EZH2, catalyzes the methylation of histone H3 lysine 27 (H3K27), while H3K4 methylation is catalyzed by the complex of proteins associated with Set1 (COMPASS). Similarly, histone lysine demethylase (KDM)6A/B/C removes methylation on H3K27, while LSD1/KDM1A demethylates H3K4me1/2 20. The combination of histone modifications represents a code correlated with functional elements on the chromatin. For example, H3K4 methylation is associated with active elements: H3K4me1 and H3K4me3 are associated with enhancers and promoters, respectively 21,22. Histone modifications can influence gene expression. For example, H3K27ac and H3K9ac are associated with active gene expression, while H3K27me3 is associated with transcriptional repression. The specific methylation state of the same amino acid residue can also function differently: H3K9me1 is associated with active promoters, while H3K9me2/3 is associated with promoter repression 23. Once histone marks are established, they can be recognized by "readers", proteins that interact with specific histone marks and exert a myriad of effects 20. In this review, we will focus on ATPase-dependent chromatin remodeling complexes, which contain reader subunits and can directly influence the chromatin state by modifying either accessibility or nucleosome composition 24,25.

Alterations of histone modifiers establish the histone code underlying gastric carcinogenesis
Together, writer and eraser proteins establish the histone code, influencing the chromatin landscape and regulating a multitude of signaling pathways and genes. Unsurprisingly, many of these proteins play important roles in oncogenesis and tumor suppression. Since the dysregulation of writer and eraser proteins in GC has been previously discussed 17,26,27, here we focus on recent discoveries addressing their roles in GC. Histone methyltransferases. Analysis of TCGA data showed that a number of histone modifiers are differentially expressed between GC and noncancer samples 28.
A more recent analysis of TCGA data focusing on HMTs found that the increased expression of G9a/EHMT2, an H3K9 methyltransferase, is associated with poor prognosis, and its oncogenic effect may be mediated by activating the expression of MTOR via increased promoter H3K9me1 29-31. PRC2, which catalyzes H3K27me3, is also overexpressed in GC, and this overexpression is associated with poor prognosis 32. Recent studies have demonstrated that PRC2 may promote GC development via multiple pathways. Xing et al. found that EZH2 represses EBF1 expression by increasing promoter H3K27me3, which subsequently activates the expression of TERT 33, while another study showed that EZH2 may promote cancer via the downregulation of PTEN 34. EZH2 was also found to induce H3K27me3 at the promoter of P21, a cell cycle regulator, and downregulate its expression 35. In addition, EZH2 repressed the expression of ERRγ, a nuclear hormone receptor and transcription factor, via H3K27me3, which led to the activation of an oncogene, FOXM1, in GC cells 36. Other HMTs have also been studied in recent years. For example, WDR5 is a subunit of the KMT2/MLL complex that catalyzes H3K4me1 and H3K4me2, and its abnormal expression is associated with poor prognosis. Mechanistically, it increased H3K4me3 at the promoter of CYCLIN D1, a cell cycle gene frequently upregulated in human cancer, and activated its expression 37. KMT2 family genes were found to be mutated in approximately 10% of gastric cancer cases, and these mutations were associated with PD-L1 expression, suggesting potential roles in immune checkpoint therapy 38. SETD1A, another HMT for H3K4, was overexpressed and localized to the promoter of SNAIL, a snail family transcriptional repressor, enhancing epithelial-mesenchymal transition (EMT) in GC cells 39. In addition, the increased expression levels of DOT1L, an H3K79 methyltransferase, and of protein arginine methyltransferases such as PRMT5 and PRMT6 were also associated with poor prognosis 40-42. Together, these recent studies demonstrate that writer proteins regulate histone modification across the GC genome and that their increased expression likely leads to the dysregulation of multiple genes and signaling pathways, promoting cancer development. Histone demethylases. LSD1, the first histone demethylase identified, has been shown to act on both H3K4 and H3K9, and thus has the potential to both repress and activate target gene expression. LSD1 was found to be more highly expressed in GC than in normal tissue, and its increased expression was associated with metastasis and advanced cancer stages 43,44. Mechanistically, it may promote EMT through the demethylation of promoter H3K4me2 and the subsequent downregulation of CDH1 (E-cadherin) expression 44. Histone deacetylases, acetyltransferases, and kinases. When the expression levels of histone deacetylases and kinases were analyzed, upregulation of HDAC2, NEK6, and AURKA was identified in GC 45,46. Corroborating these findings, a recent study showed that HDAC2 may mediate the deacetylation of H3K9 at the P21 promoter, leading to increased proliferation in GC cells 47. In addition, the expression of the HATs P300 and CBP was found to be upregulated in GC cells 48. Together, these recent studies have shown that altered expression levels of histone modifiers are associated with gastric tumorigenesis, leading to cancer progression through the regulation of oncogene and tumor suppressor gene expression.
Writers, erasers, and readers play a critical role in maintaining the proper chromatin state and gene expression for normal homeostasis. Mutations or altered expression levels of writers, erasers, and readers can lead to dysregulation of key signaling pathways and/or genes involved in gastric tumorigenesis (Fig. 1). The chromatin landscape, as defined by histone marks, is significantly remodeled during carcinogenesis. Therefore, new studies may benefit from genomic analyses of the chromatin marks that are altered by the gain or loss of chromatin modifying factors. In addition, as most recent studies have utilized cancer cell lines, in vivo animal models or organs-on-a-chip combined with intact niche components would likely provide important new insight into tumor heterogeneity and the microenvironment.

Alterations in chromatin remodeling complexes play functional roles in gastric carcinogenesis
One way in which the histone code can direct functional rearrangement of chromatin is via the recruitment of chromatin remodeling complexes. Currently, as defined by their core ATPase subunits, four subfamilies of chromatin remodeling complexes exist: SWI/SNF, INO80, CHD, and ISWI 25. Of note, in each of these complexes, genetic alterations and changes in gene expression have been linked to GC development. CHD. The chromodomain helicase DNA-binding (CHD) subfamily of chromatin remodeling factors is so named because of the presence of chromodomains that interact with methylated histones. The expression of CHDs has been examined in human GC cases. Notably, mutations and reduced expression of CHD4 and CHD8 were found in microsatellite instability (MSI)-high GC 49. Consistently, knockdown of CHD8 increased the proliferation of GC cells 50. Reduced expression of CHD5 also correlated with poor prognosis and invasion 51. CHDs can interact with other epigenetic regulators, such as MBDs and MTAs, to form the NuRD complex, and subunits of this complex are involved in GC development. MBD2 was found to be downregulated in gastric preneoplastic lesions and tumors 52. In contrast, increased expression levels of MTA1 and MTA2 were associated with GC invasion and metastasis 53-55. Indeed, the overexpression of MTA2 promoted colony formation in GC cells, potentially via the upregulation of IL-11 56. Interestingly, increased expression of MTA3 was also found in GC compared to normal tissue. These studies highlight the importance of the subunit composition of the NuRD complex in gastric tumorigenesis 57. SWI/SNF. Unlike CHD, the SWI/SNF complexes contain subunits harboring bromodomains, which recognize acetylated histones. Specifically, the SWI/SNF complex localizes to H3K27ac-labeled active enhancers to facilitate gene expression 58. This can be accomplished via an increase in chromatin accessibility to transcription factors, as the SWI/SNF complex has the ability to mobilize and shift nucleosomes 59,60. Multiple subunits of the SWI/SNF complex have been associated with gastric cancer development 61-64. Most notably, unbiased genome and exome sequencing studies have identified ARID1A, the gene encoding the DNA-interacting subunit of the SWI/SNF complex 65, as the second most frequently mutated gene after TP53 in GC 5,66,67. Several clinical studies have shown that ARID1A expression negatively correlates with invasion, metastasis, and a worse prognosis, further supporting its role 68-70.
Cell culture studies using GC cell lines have shown that ARID1A loss promotes GC cell proliferation and migration 71-74. Consistent with these data, we generated novel gastric tumor mouse models and demonstrated a tumor suppressor role of Arid1a in vivo 75. Unlike ARID1A, the increased expression of BRG1, the ATPase subunit of the SWI/SNF complex, appeared to correlate with gastric cancer development and metastasis 76. Consistently, stabilization of BRG1 promoted EMT via the upregulation of SNAIL 77,78. Interestingly, the expression of its mutually exclusive subunit, BRM, was lost during gastric carcinogenesis, highlighting the complex interactions between different subunits of the same chromatin remodeling complex. Beyond its chromatin remodeling activity, loss of SWI/SNF subunits led to global remodeling of the H3K27ac landscape, suggesting that the complex can regulate H3K27ac 58,79,80. This function of the SWI/SNF complex may be facilitated by its association with histone acetyltransferases and HDACs 81. Alternatively, the SWI/SNF complex may increase chromatin accessibility for other HATs to deposit H3K27ac. Corroborating these data, our H3K27ac analysis of mice with heterozygous deletion of Arid1a showed that the loss of H3K27ac+ enhancers was associated with p53 and apoptosis pathway genes, with a corresponding reduction in their expression 75. This study demonstrates that Arid1a is required for the maintenance of active enhancers at genes involved in the tumor suppressor pathway, providing new mechanistic insight into the epigenetic regulation of GC (Fig. 2).

Fig. 2 The chromatin remodeling complex SWI/SNF is known to recognize histone acetylation via its reader subunit. The complex utilizes ATP-dependent helicase activity to slide nucleosomes and increase chromatin accessibility. This allows other histone acetyltransferases (HATs) to maintain H3K27ac, a histone modification associated with active gene expression. The loss of one copy of Arid1a in gastric tumors results in loss of H3K27ac at the enhancer and subsequent downregulation of p53 target genes, followed by suppression of apoptosis, leading to tumor progression.

Given that the SWI/SNF and PRC2 complexes may regulate a common substrate such as H3K27, they may interact to control gene expression. Indeed, their antagonistic relationship has been demonstrated in multiple systems 82-84. Furthermore, EZH2 inhibition has been shown to provide a specific vulnerability for ARID1A-mutated ovarian and gastric cancers, further supporting the interactions between writers and readers in cancer 84,85. ISWI and INO80. Much less is known about the ISWI and INO80 complexes in the context of GC. The ISWI subfamily contains a few complexes, including the RSF and NURF complexes. High expression levels of RSF-1 (part of the RSF complex) were associated with poor prognosis in GC 86, and knockdown of NURF complex subunits showed tumor suppressive effects in GC cells, further supporting their roles in GC 87. While the role of the INO80 complex in gastric carcinogenesis is relatively unexplored, it contains YY1 as a subunit 88, a transcription factor known to activate multiple oncogenic pathways in GC 89-94. However, whether this activation is mediated by INO80 has yet to be elucidated.
These recent studies have shown that mutations in chromatin remodeling complexes and/or their altered expression levels promote gastric carcinogenesis through the abnormal activation or inhibition of oncogenic and tumor-suppressive pathways. These chromatin complexes also interact with writers and readers to establish and maintain the oncogenic histone code.

The oncogenic chromatin landscape in gastric cancer
Since histone modification is closely tied to transcriptional regulation, it acts as a common mechanism for the dysregulation of gene expression during cancer development. The gain of active histone marks at the promoters/enhancers of oncogenes can lead to their increased expression, while the gain of repressive marks at the promoters/enhancers of tumor suppressors can lead to their reduced expression. These changes are often mediated by alterations in the expression or localization of writer proteins 6. Here, we discuss recent studies providing new insight into how the oncogenic chromatin landscape influences GC through the regulation of promoters and enhancers. With the advent of next-generation sequencing technologies, such as chromatin immunoprecipitation sequencing (ChIP-seq), scientists have been able to investigate genome-wide changes in histone modification during gastric carcinogenesis. Using Nano-ChIP-seq for H3K4me1 and H3K4me3 (enhancer- and promoter-associated histone marks, respectively) and H3K27ac (an active histone mark), Muratani et al. identified hundreds of H3K4me3+ promoters and H3K4me1+ enhancers whose activity differs between GC and normal tissue 95. They defined the promoters that are gained in cancer, demonstrating that the genes associated with these promoters are enriched in cancer-related gene sets and have prognostic value. This study suggested that the activation of promoters may be a critical driver of carcinogenesis. In addition, by integrating chromatin profiles from ENCODE, the authors also found an enrichment of the PRC2 subunits EZH2 and SUZ12 at the promoters that are gained/lost in cancer, highlighting the role of PRC2 in maintaining the cancer chromatin landscape. Muratani et al. also identified the presence of alternative promoters in GC 95. These promoters, not located at the canonical transcriptional start sites (TSSs), drive the expression of noncanonical transcripts and protein isoforms, which can lead to the activation of oncogenic pathways. Another Nano-ChIP-seq study by the same group further expanded on the mechanisms of alternative promoters. They found that the use of alternative promoters in GC led to an overall decrease in the expression of peptides having a strong affinity for MHC class I molecules, thus leading to decreased antigen presentation and potential immune evasion 96. Recently, Sundar et al. performed single-cell RNA-seq in gastric tumors and integrated the results of a previous study to stratify the tumors based on alternative promoter usage. They found that tumors with low alternative promoter usage contain more infiltrating T cells than tumors with high alternative promoter usage, supporting the role of alternative promoters in immune evasion 97. The presence of these alternative promoters has subsequently been confirmed across a number of cancer types in a recent large-scale RNA-seq study, suggesting that they represent one of several new mechanisms underlying dysregulated gene expression during carcinogenesis 98.
A more recent study utilized long-read RNA-seq with multiple GC cell lines and showed that noncanonical transcripts from alternative promoters may also have different coding sequences and 3' UTRs than canonical transcripts. Furthermore, the authors found that the use of alternative promoters for specific cancer genes such as ARID1A is associated with different survival outcomes and molecular subtypes in GC 99, suggesting that alternative promoters can be utilized to alter the activity of key readers involved in gastric carcinogenesis (Fig. 3).

Fig. 3 Alternative promoters in gastric carcinogenesis. Alternative promoters recognized by H3K4me are frequently identified in GC. These promoters activate the expression of tumor-specific RNA and protein isoforms, which are highly expressed in tumors. In some cases, these tumor protein isoforms can have gain-of-function activity that promotes tumor invasion (e.g., MET and RASA3). Rarely, tumor isoforms are found to be downregulated in cancer compared to control, as in the case of ARID1A, specifically in GC of the CIN subtype. Furthermore, the expression of these tumor isoforms can result in a lower relative expression of canonical peptides with high affinity for MHC I, leading to decreased antigen presentation and immune evasion.

In addition to this promoter regulation, enhancers also play a key role in GC development. Ooi et al. analyzed superenhancers, which are large enhancer regions characterized by high levels of active chromatin marks and coactivator binding. They found that superenhancers gained in GC compared to normal tissue are enriched in genes related to important cancer hallmarks, including invasion, angiogenesis, and resistance to cell death 100. Another recent study employed paired-end ChIP-seq of H3K27ac to identify chromosomal rearrangements involving enhancers in GC. The authors identified rearrangements leading to enhancer hijacking, in which an enhancer from one region of the genome relocates to activate a gene not normally under its regulation. This led to the activation of oncogenes such as CCNE1 101. These studies showed that functional elements on chromatin, as defined by histone marks, can be remodeled to cause dysregulation of gene expression in GC. The tumor microenvironment (TME) plays an important role in providing tumors with the factors necessary to facilitate their growth and progression. The chromatin landscape of the TME in gastric cancer has recently been explored. Maeda et al. examined cancer-associated fibroblasts (CAFs) in GC by performing H3K27me3 ChIP-seq and found that H3K27me3 (a repressive mark) is lost in CAFs, compared with normal fibroblasts, at genes associated with the gastric stem cell niche, such as WNT5A 102. In addition, a recent study found that SAA1, a secreted protein involved in the migration of GC cells, is upregulated in CAFs compared to normal fibroblasts, which may be mediated by increased H3K27ac at its enhancers and promoter 103. These studies provide new evidence that GC development can be promoted by epigenetic changes in the tumor microenvironment. Together, genome-wide chromatin analyses of both GC cells and the TME have provided new mechanistic insight into how oncogenic and tumor suppressor pathways are epigenetically regulated in gastric tumorigenesis.

NONCODING RNAS IN GASTRIC CANCER
Over the last decade, noncoding RNAs (ncRNAs) have emerged as key players in the epigenetic regulation of carcinogenesis.
ncRNAs are classified into short and long noncoding RNAs, both of which regulate the expression of many key signaling pathways and regulator genes. One extensively studied type of short noncoding RNA in GC is the microRNA (miRNA): miRNAs are RNAs less than 30 nucleotides in length that silence complementary mRNAs by either mRNA cleavage or translational repression 104. In contrast, long noncoding RNAs (lncRNAs) are more than 200 nucleotides long, and they possess regulatory roles in facilitating miRNA-mRNA-protein crosstalk 105. Here, we discuss recent discoveries addressing the roles of miRNAs and lncRNAs in gastric tumorigenesis.

miRNAs regulate the components of key oncogenic pathways in gastric cancer
While expression profiling experiments previously identified differentially expressed miRNAs between GC and normal gastric cells in vitro 106, analysis of human GC and preneoplastic lesions identified distinct miRNAs that are up- or downregulated at different stages of the disease 107. These studies suggested that miRNAs may play key roles during each step of gastric carcinogenesis. By regulating key oncogenic pathways such as PI3K/AKT signaling, miRNAs can act as both tumor suppressors and oncogenes 108-110. For example, miR-137 acts as a tumor suppressor by repressing AKT, whereas miR-21, miR-221, and miR-222 promote gastric tumor cell proliferation by targeting PTEN, a negative regulator of AKT 111,112. Additional miRNAs involved in the PI3K/AKT pathway have been identified. miR-107 promoted GC cell proliferation and metastasis by targeting FOXO1, a transcription factor downstream of AKT signaling, and FAT4, a tumor suppressor that inhibits PI3K/AKT signaling 113,114. Other miRNAs, such as miR-575 and miR-32-5p, were found to target PTEN, either directly or indirectly 115,116. Additionally, functional studies showed that, by targeting components of PI3K/AKT signaling, miR-20b and miR-451a have opposing roles in gastric carcinogenesis 117. Another key oncogenic pathway regulated by miRNAs in GC is the Wnt signaling pathway, which is activated in approximately 46% of GC cases 118,119. In a recent study, Deng et al. found significant upregulation of miR-192 and miR-215 in GC tissue samples and showed that these miRNAs activate Wnt signaling by targeting APC, a negative regulator of the pathway 120. Another study illustrated the tumor suppressive role of miR-520f-3p in GC by targeting SOX9, an HMG-box transcription factor, and subsequently inactivating Wnt signaling 121. These findings demonstrate that miRNAs regulate key oncogenic signaling pathways and that their expression is tightly regulated in gastric tumorigenesis.

LncRNAs regulate gastric tumorigenesis by facilitating crosstalk between miRNAs and their target genes
By interacting with DNA, mRNA, miRNA, and histone-modifying complexes, lncRNAs can regulate transcription, post-transcriptional processing, translation, and epigenetic modifications. They play key roles in tumorigenesis and have been shown to regulate cell proliferation, the cell cycle, apoptosis and metastasis 122. Several lncRNA expression profiling studies found aberrant expression patterns of lncRNAs in GC. A comparison of lncRNA expression profiles in gastric tumor tissues to adjacent nontumorous tissues identified two differentially expressed lncRNAs, uc001lsz and H19. As the second most downregulated lncRNA in gastric cancer, uc001lsz was proposed to have a tumor-suppressive, trans-acting effect on MUC2, which is highly expressed in GC.
In contrast, the authors found that H19 was the most upregulated lncRNA in gastric tumor samples 123. Consistent with this expression pattern, H19 is involved in multiple processes of gastric tumorigenesis. For example, Yang et al. demonstrated that H19 partially inactivates p53 in GC cells, while another study showed that the ectopic expression of H19 promotes gastric tumorigenesis by acting as a precursor of miR-675 124,125. Other studies have also demonstrated that the H19/miR-675 axis promotes GC cell proliferation by targeting the tumor suppressor RUNX1 126,127. In addition to encoding miR-675, H19 also interacts with other miRNAs to promote gastric tumorigenesis. For example, H19 promoted tumorigenesis via the miR-138/E2F2 axis 128,129. Moreover, the upregulation of miR-22-3p rescued H19-induced GC cell growth 130. HOTAIR, another lncRNA upregulated in GC, was shown to promote tumorigenesis through its interactions with miRNAs and target proteins 131-133. Specifically, HOTAIR promoted proliferation by negatively regulating miR-126 and activating CXCR4/RhoA signaling 132 and enhanced gastric cancer cell invasion by inducing the ubiquitination of RUNX3, an upstream regulator of the tight junction protein CLAUDIN1 133. Recently, HOTAIR has also been reported to promote gastric tumorigenesis by downregulating the exosomal levels of miR-30a and miR-30b, providing insight into the extracellular role of HOTAIR in GC 134. In addition to H19 and HOTAIR, other lncRNAs identified from lncRNA profiling in GC are beginning to gain attention 135,136. For example, PVT1 has been shown to promote gastric tumorigenesis through its interactions with FOXM1, miR-125, and miR-30a 137-139. Enriched in advanced GC 140, MALAT1 has been shown to promote GC cell proliferation by recruiting SF2/ASF, a splicing factor, downregulating miR-204, and activating the PI3K/AKT pathway 141-144. In summary, these studies show that the expression levels of specific lncRNAs are tightly regulated in GC tumorigenesis and that they interact with both miRNAs and target proteins to promote cancer progression.

INTERACTION BETWEEN NCRNAS AND HISTONE MODIFICATION IN GASTRIC CANCER
The different modes of regulation in cells do not function in isolation but are interwoven in an intricate network to cooperatively regulate gene expression. As our understanding of both histone modification and ncRNAs grows, increasing interest is also focused on how their interactions control gastric tumorigenesis. Despite their distinct mechanisms of action, both miRNAs and lncRNAs influence histone modifications and are in turn regulated by these modifications in a reciprocal manner (Fig. 4).

miRNAs and histone modifiers reciprocally regulate each other to promote or repress gastric tumorigenesis
Recent studies have examined the interactions between miRNAs and histone writers, demonstrating that they regulate each other in a reciprocal manner to promote or suppress tumorigenesis. miR-130b-3p promoted GC proliferation by inhibiting MLL3a, an H3K4 methyltransferase, in M2 macrophages 145, while miR-4513 stimulated both GC proliferation and EMT by targeting KAT6B 146. In addition to histone writers, miRNAs also interact with erasers to either suppress or promote gastric tumorigenesis. For example, the regulation of KDMs by miRNAs is well characterized for its tumor-suppressing effect. miR-194 was found to target KDM5B, and its overexpression suppressed GC growth both in vitro and in vivo 147.
Similarly, the upregulation of miR-29b suppressed gastric tumorigenesis by targeting KDM2A 148. More recently, miR-329 and miR-491-5p were identified as regulators of KDM1A and KDM4B (JMJD2B), respectively, in suppressing tumorigenesis 149,150. In contrast, by downregulating SIRT1, miR-543 promoted proliferation and miR-204 reversed SIRT1-induced EMT in GC cells 151,152. Alternatively, chromatin modifiers are also capable of regulating miRNAs in GC. For example, SIRT7-mediated deacetylation of H3K18 repressed miR-34a, leading to the inhibition of apoptosis in GC cells 153. A recent study also showed that BRD4 represses miR-216a-3p, a negative regulator of Wnt3a, thereby promoting the stemness of GC cells through the upregulation of Wnt signaling 154. Additionally, deletion of HDAC1 led to upregulation of miR-34a, resulting in the suppression of miR-34a's downstream oncogene CD44 and thereby inhibiting the metastasis of GC cells 155. In contrast, deletion of LSD1 resulted in the downregulation of miR-142-5p, which led to the upregulation of its tumor-suppressing target CD9, thereby repressing gastric cancer cell migration 43. These results highlight the existence of a complex miRNA-histone modifier network in which miRNAs and histone modifiers reciprocally regulate each other in gastric tumorigenesis.

lncRNAs act as guides and/or scaffolds for histone modifiers
While miRNAs typically regulate the expression of histone-modifying proteins and complexes, lncRNAs interact directly or indirectly with writers and erasers, often recruiting them to their target genes to perform histone modifications. In particular, multiple lncRNAs interact with EZH2, a subunit of PRC2, to either attenuate or promote gastric tumorigenesis. A screen for lncRNAs in GC patients identified LINC00628 as a tumor suppressor; LINC00628 suppressed tumorigenesis by interacting with EZH2 and guiding PRC2 to oncogenes such as CCNA2 and HOX11, leading to their methylation and silencing 156. A more recent study suggested that lncRNAs can regulate gene expression by indirectly recruiting EZH2 through a mediator. In this study, Han et al. showed that PART1, a lncRNA significantly downregulated in GC, upregulates PLZF via its interaction with the androgen receptor (AR). PLZF then recruits EZH2 to increase H3K27 trimethylation at the PDGFB promoter, suppressing its expression and subsequently downregulating PI3K/Akt signaling 157. Two additional lncRNAs, LINC01232 and LINC00202, were found to be upregulated in GC, and they interacted with EZH2, negatively regulating the expression of KLF2, a zinc-finger transcription factor, through histone methylation 158,159. Another oncogenic lncRNA, HOTAIR, promoted EMT by recruiting PRC2 to catalyze H3K27me3 at the CDH1 promoter in GC 160. Interestingly, EZH2 was recently shown to act upstream of lncRNAs. Zhu et al. demonstrated that EZH2 could methylate the promoter of a lncRNA, PCAT18, possibly contributing to its low expression in GC; the authors showed that PCAT18 can inhibit GC cell proliferation by regulating P16 expression 161. These studies highlight the importance of ncRNA-histone writer interactions in gastric tumorigenesis. Interactions between lncRNAs and erasers, specifically demethylases such as LSD1, also play crucial roles in gastric tumorigenesis. A recent study identified the upregulation of LINC00461 in GC tissues and showed that its knockdown in GC cells decreases cell viability and proliferation.
Further evidence indicated that LINC00461 mediates GC cell viability via direct interaction with LSD1 162. In light of this finding, more recent studies have uncovered the downstream mechanisms of lncRNA interactions with LSD1. For example, Lian et al. showed that LINC01446 promotes GC cell proliferation and metastasis by epigenetically silencing RASD1, a member of the Ras family of monomeric G proteins, through the recruitment of LSD1 to the RASD1 promoter 163. In contrast, Pan et al. found that the binding of LINC00675 to LSD1 suppresses the transcription of SPRY4, an upstream effector of RAS, thereby reducing GC cell proliferation and migration. Interestingly, the overexpression of LINC00675 led to a stronger binding capacity of LSD1 to H3K4me2 and decreased H3K4me2 levels at the SPRY4 promoter, suggesting that LINC00675 regulates SPRY4 expression through LSD1 in GC 164. Interestingly, some lncRNAs can interact with both writers and erasers, highlighting their complex mechanistic roles in gastric tumorigenesis. For example, LINC00673 promoted GC cell proliferation by repressing key tumor suppressors, such as KLF2 and LATS2, through its interactions with EZH2 and LSD1. Specifically, LINC00673 facilitated EZH2 and LSD1 binding to the promoters of KLF2 and LATS2 to induce H3K27 trimethylation and H3K4 demethylation, respectively 165. Another study found that FOXD2-AS1 promotes GC progression by recruiting both EZH2 and LSD1 to repress EPHB3 expression 166. These results provide compelling evidence that lncRNAs can interact with multiple histone modifiers to regulate gastric tumorigenesis. Recent studies have shown that lncRNAs are also able to act as scaffolds for multiple histone modifiers. For example, GClnc1 interacted with both WDR5 and KAT2A; GClnc1 promoted gastric tumorigenesis by acting as a scaffold for WDR5/KAT2A and recruiting them to the promoter of SOD2, a potential oncogene, which subsequently facilitated its expression by increasing H3K4me3 167. Another study showed that GCAWKR recruits WDR5/KAT2A to upregulate the oncogene PTP4A1 168. In addition to GClnc1 and GCAWKR, HOXA11-AS can also act as a scaffold for both histone modifiers and DNA methyltransferases, including EZH2, LSD1, and DNMT1. Specifically, HOXA11-AS interacts with DNMT1/EZH2 to repress KLF2 expression through H3K27me3, while its interaction with LSD1/EZH2 represses PRSS8 expression through H3K27me3 and H3K4 demethylation 169. Finally, new studies have begun to explore lncRNA interactions with readers. For example, the lncRNA NEAT1 directly interacted with BRG1, a subunit of the SWI/SNF complex, which resulted in the silencing of the G2/M checkpoint protein GADD45A and subsequently promoted GC cell proliferation 170. In summary, these studies provide valuable insight into the diverse roles of lncRNA-histone modifier interactions, revealing a complex epigenetic network involved in the regulation of gastric tumorigenesis (Fig. 4).

CONCLUDING REMARKS
The three main types of epigenetic regulation are DNA methylation, histone modification, and ncRNAs. While changes in DNA methylation remain one of the major mechanisms of gastric carcinogenesis, recent studies have shown that histone modifications and ncRNAs also play critical roles. In this review, we discuss recent findings addressing the mechanisms of histone modification and ncRNAs in GC. Specific writer and eraser proteins control histone modification and interact with readers such as chromatin remodeling complexes.
Their dysregulation in GC leads to altered expression levels of downstream oncogenes and tumor suppressor genes. In addition to histone modification, ncRNAs regulate many key signaling pathways and genes involved in the development of GC. Importantly, these histone modifications and ncRNAs also interact with and regulate each other. While somatic mutations of oncogenes and tumor suppressors are thought to be the major cause of cancer, accumulating evidence has increasingly supported the importance of epigenetic regulation in carcinogenesis. Since certain environmental factors are known to increase the risk of GC, epigenetic regulation likely serves as a critical link between the environment and gene expression. Of note, the stomach is constantly exposed to environmental factors. Infection with two gastric pathogens, Epstein-Barr virus (EBV) and H. pylori, is known to increase the risk of GC 11,13. EBV in particular is known to cause widespread DNA methylation in GC, suggesting that repression of tumor suppressors by promoter hypermethylation is a core mechanism in EBV-mediated carcinogenesis 15. Notably, a few recent studies have begun to explore the global changes in the chromatin landscape caused by EBV infection. Hi-C analysis revealed physical interactions between the EBV genome and the host genome, demonstrating EBV-mediated global changes in histone modification 171. Similar to EBV-mediated DNA methylation, H. pylori can influence the expression of key regulators through altered DNA methylation 172. It is also known to cause dysregulation of writers, readers, miRNAs, and ncRNAs, highlighting its potential to modify the epigenetic landscape globally 173-180. However, future studies are required to further define the mechanistic interactions between these environmental factors and epigenetic regulators. New technologies continue to facilitate discoveries that will improve our understanding and treatment of GC. For example, combinatorial drug therapies are proving to be a promising personalized medicine approach with the potential to reduce off-target toxicity 181. In recent years, in vitro studies combining epigenetic inhibitors with other anticancer compounds have shown synergistic effects in tumor suppression for GC 75,182-184. However, clinical trials investigating combinatorial treatments involving epigenetic inhibitors in GC have been limited 185. In addition, novel single-cell technologies such as single-cell ATAC-seq allow us to probe the roles of chromatin architecture and ncRNAs in tumor heterogeneity, which poses another major challenge in cancer treatment. Therefore, efforts to elucidate the epigenetic mechanisms underlying gastric carcinogenesis will continue to improve the treatment and prevention of GC.
Information flows from hippocampus to auditory cortex during replay of verbal working memory items

The maintenance of items in working memory relies on a widespread network of cortical areas and hippocampus where synchronization between electrophysiological recordings reflects functional coupling. We investigated the direction of information flow between auditory cortex and hippocampus while participants heard and then mentally replayed strings of letters in working memory by activating their phonological loop. We recorded LFP from the hippocampus, reconstructed beamforming sources of scalp EEG, and, additionally in 3 participants, recorded from subdural cortical electrodes. When analyzing Granger causality, the information flow was from auditory cortex to hippocampus with a peak in the 4-8 Hz range while participants heard the letters. This flow was subsequently reversed during maintenance while participants maintained the letters in memory. The functional interaction between hippocampus and the cortex and the reversal of information flow provide a physiological basis for the encoding of memory items and their active replay during maintenance.

1 Introduction
Working memory (WM) describes our capacity to represent sensory input for prospective use [1,2]. Maintaining content in WM requires communication within a widespread network of brain regions. The anatomical basis of WM was shown noninvasively with EEG/MEG [3-10] and invasively with intracranial local field potentials (LFP) [11-21] and single unit recordings [19,21-24]. In cortical brain regions, WM maintenance correlates with sustained neuronal oscillations, most frequently reported in the theta-alpha range (4-12 Hz) [3-7,9-20] or at even lower frequencies [25,26]. Also in the hippocampus, WM maintenance was associated with sustained theta-alpha oscillations [15,19]. As a hallmark of WM maintenance, persistent neuronal firing was reported during the absence of sensory input.

2 Results

2.1 Task and behavior
Fifteen participants (median age 29 y, 7 male, Table 1) performed a modified Sternberg WM task (71 sessions in total, 50 trials each). In the task, items were presented all at once rather than sequentially, thus separating the encoding period from the maintenance period.
In each trial, the participant was instructed to memorize a set of 4, 6 or 8 letters presented for 2 s (encoding). The number of letters was thus specific for the memory workload. The participants read the letters themselves and heard them spoken at the same time. Since participants had difficulties reading 8 letters within the 2 s encoding period, also hearing the letters assured their good performance. After a delay (maintenance) period of 3 s, a probe letter prompted the participant to retrieve their memory (retrieval) and to indicate by button press ("IN" or "OUT") whether or not the probe letter was a member of the letter set held in memory (Fig. 1a). During the maintenance period, participants rehearsed the verbal representation of the letter strings subvocally, i.e. mentally replayed the memory items. Participants had been instructed to employ this strategy and they confirmed after the sessions that they had indeed done so. This activation of the phonological loop [1] is a component of verbal WM as it serves to produce an appropriate behavioral response [2].

The mean correct response rate was 91% (both for IN and OUT trials). The rate of correct responses decreased with set size from a set size of 4 (97% correct responses) to set sizes of 6 (89%) and 8 (83%) (Fig. 1b). Across the participants, the memory capacity averaged 6.1 (Cowan's K = (correct IN rate + correct OUT rate - 1) * set size), which indicates that the participants were able to maintain at least 6 letters in memory. The mean response time (RT) for correct trials (3045 trials) was 1.1 ± 0.5 s and increased with workload from set size 4 (1.1 ± 0.5 s) to 6 (1.2 ± 0.5 s) and 8 (1.3 ± 0.6 s), i.e. 53 ms/item (Fig. 1c). Correct IN/OUT decisions were made more rapidly than incorrect decisions (1.1 ± 0.5 versus 1.3 ± 0.6 seconds). These data show that the participants performed well in the task and that the difficulty of the trials increased with the number of letters in the set. In further analysis, we focused on correct trials with set size 6 and 8 letters to assure hippocampal activation and hippocampo-cortical interaction as shown earlier [19].
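To make the capacity measure concrete, here is a minimal Python sketch of Cowan's K exactly as defined above, (correct IN rate + correct OUT rate - 1) * set size; the example rates are illustrative values, not the participants' data.

```python
def cowans_k(correct_in_rate, correct_out_rate, set_size):
    """Cowan's K: (correct IN rate + correct OUT rate - 1) * set size."""
    return (correct_in_rate + correct_out_rate - 1.0) * set_size

# Illustrative values only (not the study's data):
# a participant with 90% correct IN and 86% correct OUT at set size 8
print(cowans_k(0.90, 0.86, 8))  # -> 6.08, i.e. about 6 letters maintained
```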
To investigate how cortical and hippocampal activity subserves WM processing, we analyzed the LFP recorded in the hippocampus (Fig. 1d) together with ECoG from cortical strip electrodes (Fig. 2a, Fig. 3a, f). In the following, we present power spectral density (PSD) time-frequency maps from representative electrode contacts. In an occipital recording of Participant 1 (grid contact H3), strong gamma activity (> 40 Hz) in the relative PSD occurred while the participant viewed the letters during encoding (increase > 100% with respect to fixation, Fig. 2b). Similarly, encoding elicited gamma activity in a temporal recording over auditory cortex (increase > 100%, grid contact C2, Fig. 2c), similar as in [25]. Gamma increased significantly only in temporal and occipital-parietal contacts (permutation test with z-score > 1.96, Fig. 2a). After the letters disappeared from the screen, activity occurred in the low beta range (11-14 Hz, Fig. 2b) towards the end of the maintenance period in temporal and occipital contacts (permutation test p < 0.05, Fig. 2d). Similarly, the temporal scalp EEG of Participant 2 (black rimmed disk denotes electrode site T3 in Fig. 3a) showed activity during encoding and maintenance, albeit at lower frequencies (Fig. 3b); this pattern was found only in scalp EEG and not in ECoG, probably because the strip electrode was not located over auditory cortex. In Participant 3, a similar pattern occurred in the PSD of a temporo-parietal recording (most posterior strip electrode contact, Fig. 3f), where the appearance of the probe letter again prompted gamma activity (Fig. 3g). This site coincides with the generator of scalp EEG that was found in the parietal cortex for the same task [3]. The PSD thereby confirmed the findings of local synchronization of cortical activity during WM maintenance [3,8,9]. In the hippocampus of all three participants, we found elevated activity in the beta range (12-24 Hz) towards the end of the maintenance period (increase > 100%, Fig. 2e, Fig. 3c, h).

What was the directionality of the information flow during encoding and maintenance in a trial? We used spectral Granger causality (GC) as a measure of directed functional connectivity to determine the direction of the information flow between auditory cortex and hippocampus in Participant 1 during the trials. To improve legibility, we present GC as Granger (%) = GC * 100. During encoding, the information flow was from auditory cortex to hippocampus with a maximum in the theta frequency range (dark blue curve in Fig. 2h). The net information flow ΔGranger (GC hipp→cortex − GC cortex→hipp) during encoding was significant in the 6-8 Hz range (blue bar in Fig. 2h, p < 0.05, permutation test against a null distribution). During maintenance, the information flow in the theta frequency range was reversed, i.e. from hippocampus to auditory cortex (dark red curve in Fig. 2h). The net information flow ΔGranger during maintenance was significant in the 5-8 Hz range (red bar in Fig. 2h, p < 0.05, permutation test against a null distribution). Concerning the spatial spread of the theta GC, the maximal net information flow ΔGranger (GC hipp→cortex − GC cortex→hipp) during encoding occurred from auditory cortex to hippocampus (p < 0.05, permutation test, Fig. 2i). During maintenance, the theta ΔGranger was significant from hippocampus to both auditory cortex and occipital cortex (permutation test p < 0.05, Fig. 2j). Interestingly, in Participant 1, the distribution of high ΔGranger coincides with the distribution of high PLV: both show a spatial maximum to grid contacts over auditory cortex and both appear in the theta frequency range.

We next tested the statistical significance of the spatial spread of contacts with high ΔGranger (4-8 Hz) during maintenance ([-2 0] s). To provide a sound statistical basis, we tested the spatial distribution of GC on the grid contacts against a null distribution. The activation on grid contacts was reshaped into a grid vector. The spatial collinearity of two grid vectors was captured by their scalar product. We next performed 200 iterations of random trial permutations. For each iteration we selected two subsets of trials and we calculated the scalar product between the two vectors corresponding to these subsets. We then tested the statistical significance of the scalar product (Fig. 2k). The true distribution (red) is clearly distinct from the null distribution (gray; the blue bar marks the 95th percentile). The analogous procedure was performed for PSD (Fig. 2a, d), PLV (Fig. 2g) and GC during encoding (Fig. 2i), which gave equally significant results in all cases.
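The spatial-collinearity procedure just described lends itself to a compact implementation. The following Python sketch is a simplified illustration under stated assumptions: `grid_activation(trials)` is a hypothetical stand-in for a function that maps a set of trials to one activation value per grid contact (e.g., theta ΔGranger), and 200 iterations are used as in the text; the study additionally builds its null distribution by mixing trials across task periods, as noted in the comments.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_half_products(trials, grid_activation, n_iter=200):
    """Distribution of scalar products between grid vectors computed from
    two random halves of the given trials (one value per iteration)."""
    out = []
    for _ in range(n_iter):
        perm = rng.permutation(len(trials))
        half = len(trials) // 2
        v = grid_activation([trials[i] for i in perm[:half]])
        w = grid_activation([trials[i] for i in perm[half:]])
        out.append(np.dot(np.ravel(v), np.ravel(w)))  # spatial collinearity
    return np.asarray(out)

# True distribution: split halves within one condition (e.g., maintenance).
# Null distribution: the same procedure after mixing trials from two task
# periods (e.g., fixation and encoding), as described in the Methods.
# The spatial pattern is significant if the true scalar products exceed the
# 95th percentile of the null distribution.
```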
As a further illustration of the ΔGranger time-course, the time-frequency plot (Fig. 2l) shows the difference between GC spectra (GC hipp→cortex − GC cortex→hipp) at each time point, where blue indicates net flow from auditory cortex to hippocampus and red indicates net flow from hippocampus to auditory cortex.

Similarly in Participant 2, the time course of GC followed the same pattern between auditory cortex (anterior strip electrode contact in Fig. 3a) and hippocampus (Fig. 3d, e). Among the three participants that had both LFP and temporo-parietal ECoG recordings, Participant 3 had an electrode contact over visual cortex; the sensory localization was indexed by the strong gamma activity in the most posterior contact of the strip electrode (Fig. 3g). The time-course of information flow between visual cortex and hippocampus (Fig. 3i, j) followed the same pattern as described for the auditory cortex above. Thus, letters were encoded with information flow from sensory cortex to hippocampus; conversely, the information flow from hippocampus to sensory cortex indicated the replay of letters during maintenance.

We used beamforming [30] to reconstruct the EEG sources during encoding and maintenance for each of the 15 participants (Table 1). We tested whether the sources during fixation differed from sources during encoding and during maintenance (non-parametric cluster-based permutation t-test [31,32]). In each participant, the proportion of significant sources in the left hemisphere exceeded 80% of all significant sources. Across all participants, the spatial activity pattern during both encoding and maintenance showed the highest significance in frontal and temporal areas of the left hemisphere (Fig. S1).

The synchronization between hippocampal LFP and EEG sources (N = 15 participants) confirmed the directed functional coupling found in the three participants with ECoG. We first calculated the GC between hippocampus and the EEG beamforming sources in the auditory cortex. We found that the mean GC spectra resembled the GC spectrum for ECoG in the theta frequency range ([4 8] Hz, Fig. 4a). During encoding, the net information flow was from auditory cortex to hippocampus (light blue curve − dark blue curve, blue bar, group cluster-based permutation test). During maintenance, the net information flow was reversed (dark red curve − light red curve, red bar, group cluster-based permutation test), i.e. from hippocampus to auditory cortex. Thus, both for ECoG and EEG sources, GC showed the same bidirectional effect in theta between auditory cortex and hippocampus.

To explore the spatial distribution, we computed GC also for other areas of cortex. We averaged the net information flow (ΔGranger) in the theta range across the participants and projected it onto the inflated brain surface (Fig. 4b, c). During encoding, the mean information flow was strongest from auditory cortex to hippocampus (ΔGranger = −4.9%, p = 0.0009, Kruskal-Wallis test, Fig. 4b).
For all other areas, the mean ΔGranger was also from cortex to hippocampus but the effect was weaker (mean ΔGranger = [−3 0]%, Dunn's test, Bonferroni corrected). During maintenance (Fig. 4c) the information flow was reversed. While all areas had information flow from hippocampus to cortex (ΔGranger = [0 2]%, Dunn's test, Bonferroni corrected), the strongest flow appeared from hippocampus to auditory cortex (ΔGranger = 3.4%, p = 0.001, Kruskal-Wallis test).

The reversal of ΔGranger appeared in all 15 participants individually (Fig. 4d). We averaged ΔGranger for each participant in the [4-8] Hz theta frequency range. The ΔGranger between hippocampus and auditory cortex was negative during encoding and positive during maintenance (p = 4.1e-10, paired permutation test). The directionality and its reversal was missing for all other areas, e.g. lateral prefrontal cortex (p = 0.16, paired permutation test, Fig. 4e). Of note, all analyses up to here were performed on correct trials only.

Finally, we established a link between the participants' performance and ΔGranger. For incorrect trials, the net information flow ΔGranger from auditory cortex to hippocampus did not show the same directionality in all participants and did not reverse in direction (p = 0.37, paired permutation test, Fig. 4f). Since participants performed well (median performance 91%), we balanced the numbers of correct and incorrect trials. We calculated the GC in a subset of correct trials (median of 200 permutations of a number of correct trials that equals the mean percentage of incorrect trials = 10%); the effect was equally present for the subset of correct trials (Fig. 4d). This suggests that timely information flow, as indexed by GC, is relevant for producing a correct response.

3 Discussion
Working memory (WM) describes our capacity to represent sensory input for prospective use. Our findings suggest that this cognitive function is subserved by bidirectional oscillatory interactions between the hippocampus and the auditory cortex as indicated by phase synchrony and Granger causality. In our verbal working memory task, the encoding of letter items is isolated from the maintenance period in which the active rehearsal of memory items is central to achieve correct performance. First, analysis of task-induced power showed sustained oscillatory activity in cortical and hippocampal sites during the maintenance period. Second, analysis of the inter-electrode phase synchrony and the directional information flow showed task-induced interactions in the theta band between cortical and hippocampal sites. Third, the directional information flow was from auditory cortex to hippocampus during encoding and, during maintenance, the reverse flow occurred from hippocampus to auditory cortex. This pattern was dominant in the left cortical hemisphere, as expected for a language-related task. Fourth, the comparison between correct and incorrect trials suggests that the participants relied on timely information flow to produce a correct response. Our data suggest a surprisingly simple model of information flow within a network that involves sensory cortices and hippocampus (Fig. 4g): During encoding, letter strings are verbalized as melody. The incoming information flows from sensory cortex to hippocampus (bottom-up).
During maintenance, participants actively recall and rehearse the melody in their phonological loop [1,2]. The Granger causality indicates the information flow from hippocampus to cortex (top-down) as the physiological basis for the replay of the memory items, which finally guides action.

The current study is embedded in previous studies using the same or similar tasks. Persistent firing of hippocampal neurons indicated hippocampal involvement in the maintenance of memory items [19,22,23]. An fMRI study reports salient activity in the auditory cortex during maintenance in an auditory working memory task [33], which indicates that sensory cortical areas are involved in the maintenance of WM items. During encoding, the activity of local assemblies was associated with gamma frequencies and local processing (Fig. 2a, b, c, Fig. 3g) while GC inter-areal interactions took place in theta frequencies, in line with previous reports [29,34]. Parietal generators of theta-alpha EEG indicated involvement of parietal cortex in WM maintenance [3,5,7,19,35]. The hippocampo-cortical phase synchrony (PLV) was high during maintenance of the high workload trials [19]. Building on these previous studies, the current study focused on high workload trials and extended them by the analysis of directional information flow.

In the design of the task we aimed to separate in time the encoding of memory items from their maintenance. In the choice of the 2 s duration for the encoding period we were guided by the magic number 7±2, which may correspond to "how many items we can utter in 2 seconds" [1,2]. The median Cowan's K = 6.1 shows that high-workload trials were indeed demanding for the participants, where both encoding and maintenance may limit performance. We therefore presented the letters both as a visual and an auditory stimulus. Certainly, maintenance processes are likely to appear already during the encoding period, as maintenance neurons ramp up their activity already during encoding [19]. Furthermore, encoding may extend past the visual stimulus (t = −3 s). We therefore focused our analysis on the last two seconds of maintenance, [−2 0] s. With this task design, we found patterns of GC that were clearly distinct between encoding and maintenance.

The interaction between recordings from different brain regions has to be discussed with respect to volume conduction [36]. Of note, there was a strong frequency dependence of GC from hippocampus to ECoG (Fig. 2h, Fig. 3d, i). Likewise, GC to EEG sources showed a strong frequency dependence (Fig. 4a). This speaks against volume conduction because the transfer of signal through tissue by volume conduction is independent of frequency in the range of interest here [37,38]. Furthermore, there was a strong task dependence of GC (Fig. 2h, Fig. 3d, i, Fig. 4a).

4 Methods
We used a modified Sternberg task in which the encoding of memory items and their maintenance were temporally separated (Fig. 1a). Each trial started with a fixation period ([−6, −5] s), followed by the stimulus ([−5, −3] s). The stimulus consisted of a set of eight consonants at the center of the screen. The middle four, six, or eight letters were the memory items, which determined the set size for the trial (4, 6, or 8 respectively). The outer positions were filled with "X," which was never a memory item.
The participants read the letters and heard them spoken at the same time. After the stimulus, the letters disappeared from the screen, and the maintenance interval started ([−3, 0] s). Since the auditory encoding may have extended beyond the 2 s period, we restrict our analysis to the last 2 s of the maintenance period ([−2, 0] s). A fixation square was shown throughout fixation, encoding, and maintenance. After maintenance, a probe was presented. The participants responded with a button press to indicate whether the probe was part of the stimulus. The participants were instructed to respond as rapidly as possible without making errors. After the response, the probe was turned off, and the participants received acoustic feedback regarding whether the response was correct or incorrect. The participants performed sessions of 50 trials in total, which lasted approximately 10 min each. Trials with different set sizes were presented in a random order, with the single exception that a trial with an incorrect response was always followed by a trial with a set size of 4. The task can be downloaded at www.neurobs.com/ex_files/expt_view?id=266.

The participants in the study were patients with drug resistant focal epilepsy. To investigate a potential surgical treatment of epilepsy, the patients were implanted with intracranial electrodes. The participants provided written informed consent for the study, which was approved by the institutional ethics review board (PB 2016-02055). The participants were right-handed and had normal or corrected-to-normal vision. For nine participants (4-13), the PSD and PLV have been reported in an earlier study [19].

The depth electrodes (1.3 mm diameter, 8 contacts of 1.6 mm length, spacing between contact centers 5 mm, ADTech®, Racine, WI, www.adtechmedical.com) were stereotactically implanted into the hippocampus. Subdural grids and strips were placed directly on the cortex according to the findings of the non-invasive presurgical evaluations. Platinum electrodes with 4 mm² contact surface and 1 cm inter-electrode distances were used (ADTech®). In addition, scalp EEG electrodes were placed at the sites of the 10-20 system with minor adaptations to avoid surgical scalp lesions.

To localize the ECoG grids and strips, we used the participants' postoperative MR, aligned it to CT, and produced a 3D reconstruction of the participants' pial brain surface. Grid and strip electrode coordinates were projected on the pial surface as described in [41] (Fig. 2a, Fig. 3a, f). The stereotactic depth electrodes were localized using post-implantation computed tomography (CT) and post-implantation structural T1-weighted MRI scans. The CT scan was registered to the post-implantation scan as implemented in FieldTrip [42]. A fused image of CT and MRI scans was produced and the electrode contacts were marked visually. The hippocampal contact positions were projected on a parasagittal plane of MRI (Fig. 1b). Some of the electrode contacts were found in tissue that was deemed to be epileptogenic and that was later resected. Still, neurons in this tissue have been found to participate in task performance in an earlier study [19].

All recordings were performed with Neuralynx ATLAS, sampling rate 4000 Hz, 0.5-1000 Hz passband (Neuralynx, Bozeman MT, USA, www.neuralynx.com).
ECoG and LFP were recorded against a common intracranial reference. Signals were analyzed in Matlab (Mathworks, Natick MA, USA). We re-referenced the hippocampal LFP against the signal of a depth electrode contact in white matter. We re-referenced the cortical ECoG against a different depth electrode contact. The choice of two separate references for LFP and ECoG has been shown to avoid spurious functional connectivity estimates [39]. The scalp EEG was recorded against an electrode near the vertex and was then re-referenced to the averaged mastoid channels. All signals were downsampled to 500 Hz. All recordings were done at least 6 h from a seizure. Trials with large unitary artefacts in the scalp EEG were rejected. We focused on trials with high workload (set sizes 6 and 8) for further analysis. We used the FieldTrip toolbox for data processing and analysis [30].

We first calculated the relative power spectral density (PSD) in the time-frequency domain (Fig. 2b). Time-frequency maps for all trials were averaged.

We calculated the phase locking value (PLV) as

PLV_{i,j}(f) = \frac{1}{N} \left| \sum_{n=1}^{N} \frac{X_{i,n}(f)\, X_{j,n}^{*}(f)}{\left| X_{i,n}(f)\, X_{j,n}^{*}(f) \right|} \right|

where PLV_{i,j} is the PLV between channels i, j, N is the number of trials, X(f) is the Fourier transform of x(t), and (·)* represents the complex conjugate. Using the spectra of the two-second epochs, phase differences were calculated for each electrode pair (i, j) to quantify the inter-electrode phase coupling. The phase difference between the two signals indexes the coherence between each electrode pair and is expressed as the PLV. The PLV ranges between 0 and 1, with values approaching 1 if the two signals show a constant phase relationship over all trials.

In our description of EEG frequency bands, we used theta (4-8 Hz), alpha (8-12 Hz), beta (12-24 Hz) and gamma (> 40 Hz), while the exact frequencies may differ in individual participants.
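As a concrete illustration of this definition, here is a minimal Python sketch that computes the PLV at one frequency from per-trial Fourier coefficients; the input arrays are assumed to hold one complex coefficient per trial and channel and are not part of the study's pipeline.

```python
import numpy as np

def plv(Xi, Xj):
    """Phase locking value between channels i and j at one frequency.

    Xi, Xj: complex arrays of shape (n_trials,) holding the Fourier
    coefficients X_{i,n}(f) and X_{j,n}(f) of each trial.
    """
    cross = Xi * np.conj(Xj)        # X_i(f) * X_j(f)^*
    cross = cross / np.abs(cross)   # normalize: keep only the phase difference
    return np.abs(np.mean(cross))   # 1 = constant phase relation over trials

# Illustrative check: a constant phase offset across trials gives PLV ~ 1
rng = np.random.default_rng(1)
phase = rng.uniform(0, 2 * np.pi, 100)
print(plv(np.exp(1j * phase), np.exp(1j * (phase + 0.5))))  # -> ~1.0
```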
We reconstructed the scalp EEG sources using linearly constrained minimum variance (LCMV) beamformers in the time domain. To solve the forward problem we used a precomputed head model template and aligned the EEG electrodes of each participant to the scalp compartment of the model. We then computed the source grid model and the leadfield matrix, wherein we determined the grid locations according to the brain parcels of the automated anatomical atlas (AAL) [43]. We solved the inverse problem by scanning the grid locations using the LCMV filters separately for encoding and maintenance. The EEG sources were baselined with respect to the fixation period and presented as percent change from the pre-stimulus baseline. We defined cortical areas from multiple parcels since AAL is a parcellation based on sulci and gyri. We performed all the steps of the source reconstruction with FieldTrip [30] and projected the sources onto an inflated brain surface.

In order to evaluate the direction of information flow between the hippocampus and the cortex, we calculated spectral non-parametric Granger causality (GC) as a measure of directed functional connectivity [30]. We evaluated the direction of information flow in the [4 20] Hz frequency range. To compute the GC we first downsampled the signals to the Nyquist rate of 40 Hz. We then computed the GC between hippocampal contacts and ECoG grid contacts. We also computed GC between the same hippocampal contacts and EEG sources located over the regions of interest. GC examines whether the activity on one channel can forecast activity in the target channel. In the spectral domain, GC measures the fraction of the total power that is contributed by the source to the target. We transformed signals to the frequency domain using the multitaper frequency transformation method (2 Hann tapers, frequency range 4 to 20 Hz with 20 seconds padding) to reduce spectral leakage and control the frequency smoothing.

We used a non-parametric spectral approach to measure the interaction in the channel pairs at a given time interval [44]. In this approach, the spectral transfer matrix is obtained from the Fourier transform of the data. We used the FieldTrip toolbox to factorize the transfer function H(f) and the noise covariance matrix Σ. The spectral GC from channel y to channel x is then

GC_{y \to x}(f) = \ln \frac{S_{xx}(f)}{\tilde{S}_{xx}(f)}

where S_{xx}(f) is the total power and \tilde{S}_{xx}(f) the instantaneous (intrinsic) power. To improve legibility, we present GC as Granger % = GC * 100. To average over the group of participants, we calculated the Granger spectra for the selected channel pairs and averaged these spectra over participants (Fig. 4a). To illustrate the time course of GC, we calculated time-frequency maps with the multitaper convolution method of FieldTrip [30].

To analyze statistical significance, we used cluster-based nonparametric permutation tests. To assess the significance of the difference of the Granger between different directions, we compared the difference of the true values to a null distribution of differences. We recomputed GC after switching directions randomly across trials, while keeping the trial numbers for both channels constant. Then we computed the difference of GC for the two conditions. We repeated this n = 200 times to create a null distribution of differences. The null distribution was exploited to calculate the percentile threshold p = 0.05. In this way, we compare the difference of the dark and light spectra against a null distribution of differences. We mark the frequency range of significant GC with a blue bar for encoding (dark blue spectrum exceeds light blue spectrum, information flow from cortex to hippocampus) and with a red bar for maintenance (dark red spectrum exceeds light red spectrum, information flow from hippocampus to cortex).
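The direction-swapping null distribution described above can be sketched compactly. In the following Python sketch, `gc_spectrum(a, b)` is a hypothetical stand-in for the nonparametric spectral GC estimate from source trials `a` to target trials `b` (returning one value per frequency); it is not a FieldTrip call, and the swap scheme is one plausible reading of the procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def delta_granger_null(x_trials, y_trials, gc_spectrum, n_iter=200):
    """Null distribution for dGC = GC(x->y) - GC(y->x), obtained by
    randomly swapping the roles of the two channels on a per-trial basis
    while keeping the trial counts of both channels constant."""
    null = []
    n = len(x_trials)
    for _ in range(n_iter):
        swap = rng.random(n) < 0.5  # which trials get their channels swapped
        a = [y if s else x for x, y, s in zip(x_trials, y_trials, swap)]
        b = [x if s else y for x, y, s in zip(x_trials, y_trials, swap)]
        null.append(gc_spectrum(a, b) - gc_spectrum(b, a))
    return np.asarray(null)  # shape (n_iter, n_freqs)

# The observed dGC is significant at p < 0.05 wherever it exceeds the
# 95th percentile of this null distribution, frequency by frequency.
```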
To test the statistical significance of the spatial spread of contacts with high PSD, PLV, or ΔGranger, we calculated the spatial collinearity on the grid contacts against a null distribution. First, we transformed the activation on grid contacts into a grid vector. We then performed 200 iterations of random trial permutations. For each iteration we selected two subsets (50%) of trials and we calculated the scalar product between the vectors corresponding to the two subsets. The null distribution was created by randomly mixing trials from the two task periods fixation and encoding. We finally tested the statistical significance of the scalar product. The true distribution was established to be statistically distinct from the null distribution if it exceeded the 95th percentile of the null distribution.

We assessed whether the reconstructed EEG sources during encoding and maintenance were significantly different from the pre-stimulus baseline (fixation). We used FieldTrip's method ft_sourcestatistics [30], wherein we applied a non-parametric permutation approach to quantify the spatial activation pattern during the encoding of the memory items and their active replay.

Competing interests
All authors declare that they have no competing interests.

Ethics
The participants provided written informed consent for the study, which was approved upfront by the institutional ethics review board (PB 2016-02055).

Data availability
All data needed to evaluate the conclusions in the paper are present in the paper. All code used to produce the results in the paper can be requested from the authors. The task can be downloaded at www.neurobs.com/ex_files/expt_view?id=266. Part of the data has been published earlier [35]. Additional data and code are indexed in www.hfozuri.ch.

Figure captions (fragments):
Fig. 1: Set size (4, 6 or 8 letters) determines WM workload. In each trial, presentation of a letter string (encoding period, 2 s) is followed by a delay (maintenance period, 3 s). After the delay, a probe letter is presented. Participants indicate whether the probe was in the letter string or not.
f) PLV spectra during fixation (black), encoding (blue) and maintenance (red). The PLV spectra show a broad frequency distribution. The PLV during maintenance is higher than during fixation. Red bars: frequency ranges of significant PLV difference (p < 0.05, cluster-based nonparametric permutation test against a null distribution with scrambled trials during fixation and maintenance).
g) Phase locking value (PLV) between hippocampus and cortex in theta (4-8 Hz) during maintenance ([-2 0] s) is highest to contacts over auditory cortex.
b) The relative power spectral density (PSD) in the temporal scalp EEG electrode.
The Analysis of Effects of Operating Leverage, Financial Leverage, and Liquidity on Profitability in the Telecommunications Industry Listed in Indonesia Stock Exchange

The creativity and innovation of domestic telecommunications operators are still very slow and lag behind the creativity and innovation developed abroad. This has an impact on profitability, measured by ROA, for the five telecommunications companies listed on the IDX. Companies often need additional funds to expand their business units. Therefore, one of the ways in which companies fulfill their funding needs is to use operating leverage and financial leverage. In this case the company must also pay attention to liquidity, which is the company's ability to pay all short-term financial obligations at maturity. The purpose of this study is to analyze the effect of operating leverage (DOL), financial leverage (DFL) and liquidity (CR) on profitability (ROA) in the telecommunications industry listed on the IDX. The research method used is descriptive verification analysis with an explanatory survey approach, and hypotheses are tested using multiple regression analysis with panel data. The sampling technique uses purposive sampling, yielding five companies as the research sample. The research period is 2012-2018. The results of this study are that operating leverage and financial leverage have a positive but not significant effect on profitability, liquidity has a significant negative effect on profitability, and simultaneously operating leverage, financial leverage, and liquidity affect profitability. A suggestion that follows from the results is that telecommunications companies need to perform liquidity risk management, which means maintaining sufficient cash balances in the effort to fulfill financial obligations.

I. INTRODUCTION
The rapid growth of the telecommunications industry has a tremendous impact on the growth of the Indonesian economy, because the telecommunications industry has become a driving force for all sectors, from the telecommunications industry itself to other sectors such as trade, manufacturing, and small and medium businesses that drive the people's economy. The use of cellular services has metamorphosed and shifted in form along with technological developments. The telecommunications era in Indonesia is now entering a period of degradation, where the creativity and innovation of domestic telecommunications operators are still very slow and lag behind what is offered abroad, so that some companies have become distracted and have entered red ocean conditions. Acquisitions and mergers among several telecommunications companies have begun to appear.

The purpose of the company is to maximize the value of the company or shareholders' wealth and to maximize profits. The company in its business activities always strives to achieve optimal profits so that it can maintain its survival. For companies, in general, the problem of profitability is more important than the problem of profit, because even large profits are not a measure that the company has worked efficiently. Efficiency can only be known by comparing the profit earned with the assets or capital that produced it, in other words by calculating the level of profitability. For more detail about profitability: the profitability of a company shows the comparison between profit and the assets or capital that generate that profit [1].
Others argue that profitability illustrates the ability of a company to make profits through all its capabilities and existing sources, such as sales, cash, capital, number of employees, number of branches, and so on [2]. Profitability is also used to measure how much profit the company can obtain [3]. From these three definitions, it can be concluded that profitability is the achievement attained by a company in a certain period using all of its capabilities, whether capital or assets.

Every company needs funds to support its activities. Funds can be obtained internally or from outside parties. In managing its business units, a company often needs additional funds to expand. Therefore, two of the ways in which a company fulfills its funding needs are financial leverage and operating leverage. Operating leverage relates to the relationship between sales results and the level of income before interest and tax payments (the firm's sales revenue relative to its earnings before interest and taxes). Operating leverage arises because of the fixed operating costs used in the company to generate income; fixed operating costs do not change with changes in sales volume. Operating leverage thus measures how changes in income or sales translate into operating profits. Financial leverage is the second step in the process of increasing profits. In the first step, operating leverage magnifies the effect of sales changes on changes in operating income. In the second step, the financial manager has the choice to use the financial leverage lever to magnify the effect of each change in operating income on changes in earnings per share (EPS).

In this case, the company must also pay attention to liquidity. Liquidity is an indicator of a company's ability to pay all short-term financial obligations at maturity using available current assets [4]. In addition, liquidity shows the ability of a company to fulfill the financial obligations that must be met immediately, or the company's ability to meet financial obligations at the time of collection [5]. The success or bankruptcy of the company is the duty and responsibility of company management and depends on the policies management has taken, including decisions regarding financial policy to support every activity of the company in an effort to achieve its objectives, namely to maintain a stable level of profitability and to analyze the factors that caused the level of profitability to decline in the most recent period. Based on the background stated above, the researchers are interested in conducting research with the title "Analysis of the Effects of Operating Leverage, Financial Leverage, and Liquidity on Profitability in the Telecommunications Industry Listed on the IDX".

II. MATERIALS AND METHODS
According to Gitman [6], there are several ways to measure the profitability of a company, namely Gross Profit Margin, Operating Profit Margin, Net Profit Margin, Earnings Per Share, Return on Assets (ROA), and Return on Equity (ROE). This study is limited to the Return on Assets (ROA) ratio. ROA measures the profitability of a company; it is a measure of the effectiveness of management in generating profits with available assets. The higher the rate of return produced, the better the company.
According to Brigham and Houston [2], operating leverage is the extent to which fixed costs are used in operating a company. Irawati [7] states that operating leverage is the use of assets with fixed costs, with the aim of generating enough revenue to cover fixed and variable costs and to increase profitability. According to Keown et al. [8], operating leverage is the responsiveness of a company's EBIT to fluctuations in sales. The size of operating leverage is measured by the DOL (degree of operating leverage).

Financial leverage is the use of funds in the form of long-term debt in the company's capital structure, accompanied by the obligation to pay fixed costs in the form of loan interest, in the hope of increasing company profitability; by funding fixed-cost needs with debt, the company can use its assets to generate sufficient income. According to Keown et al. [8], financial leverage is the practice of funding some of the company's assets with securities that bear a fixed rate of return, in the hope of increasing the final return for shareholders. The size of financial leverage is measured by the DFL (degree of financial leverage), which is the ratio of the percentage change in EPS to the percentage change in EBIT.

According to research conducted by Faizal Taufik Ibrahim [9], a company with a high level of liquidity tends to have low profitability. High liquidity reflects that the company's current debt is high. Companies with high liquidity prefer funding from debt, and funding from debt causes company profits to decrease because the company must pay the debt along with a fixed interest cost. Liquidity is an indicator of a company's ability to pay all short-term financial obligations at maturity using available current assets [10]. Of the many liquidity formulas, this study uses the current ratio. The current ratio is easily calculated and has good bankruptcy prediction ability [8].

The study used a quantitative methodology with a descriptive verification method and an explanatory survey approach. Hypothesis testing in this study uses regression analysis with panel data that combines time series data and cross section data. Table 1 shows the definitions of the operationalized variables.
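As a minimal sketch of the four measures used here (DOL, DFL, current ratio, ROA), the following Python functions compute them from period-over-period changes and balance-sheet items; the variable names and example figures are illustrative assumptions, not data from the sampled companies.

```python
def pct_change(new, old):
    return (new - old) / old

def dol(sales_0, sales_1, ebit_0, ebit_1):
    """Degree of operating leverage: % change in EBIT / % change in sales."""
    return pct_change(ebit_1, ebit_0) / pct_change(sales_1, sales_0)

def dfl(ebit_0, ebit_1, eps_0, eps_1):
    """Degree of financial leverage: % change in EPS / % change in EBIT."""
    return pct_change(eps_1, eps_0) / pct_change(ebit_1, ebit_0)

def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

def roa(net_income, total_assets):
    return net_income / total_assets

# Illustrative figures (arbitrary currency units):
print(dol(1000, 1100, 200, 260))  # EBIT +30% on sales +10% -> DOL = 3.0
print(dfl(200, 260, 10, 14))      # EPS +40% on EBIT +30% -> DFL ~ 1.33
print(current_ratio(500, 400))    # -> 1.25
print(roa(50, 1500))              # -> ~0.033
```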
III. RESULTS
This study uses five telecommunications companies listed on the Indonesia Stock Exchange as the research sample. The sample was selected using a purposive sampling technique; from the predetermined criteria, 5 companies were selected, giving 35 observations. The data used in this study were taken from www.idx.co.id and www.bi.go.id, so the researchers obtained the companies' financial statements for the research period 2012-2018. From these, the financial ratios used in the research were calculated, namely operating leverage (DOL), financial leverage (DFL), liquidity (CR), and profitability (ROA). Fig. 1, Fig. 2, Fig. 3, and Fig. 4 show the corresponding financial statements.

A. Statistical Test Results
1. Panel Data Regression
Based on the model testing carried out with the Chow test and the Hausman test, the Fixed Effect Model was selected, with the following equation:

Profitability (ROA) = -0.626611 + 0.002787 DOL + 0.019637 DFL - 0.856470 CR + ε

2. Classical Assumption Tests
Based on the autocorrelation calculation in this study, the Durbin-Watson statistic was 1.100, approaching 2, so the residuals show no autocorrelation. Based on the multicollinearity test, the variables DOL, DFL, and CR have tolerance values above 0.1 and VIF below 5, so it can be concluded that there is no multicollinearity between the independent variables. The processed data do not show a systematic pattern, which indicates that there is no heteroscedasticity in the data of this study.

B. Hypothesis Testing Results
1. Results of Hypothesis Testing with the t Test
From the regression model, the t-test values of each independent variable were obtained; with α = 0.05 and degrees of freedom (df = n - 2) = 33, the t-table value is 1.69236.

Effect of Operating Leverage on Profitability. Based on the panel data regression analysis, the significance value is 0.6491 > 0.05. Thus, at the 95% confidence level, operating leverage (DOL) has a positive but not significant effect on profitability (ROA) in the telecommunications companies listed on the Indonesia Stock Exchange in 2012-2018.

Effect of Financial Leverage on Profitability. Based on the panel data regression analysis, the significance value is 0.6066 > 0.05. Thus, at the 95% confidence level, financial leverage (DFL) has a positive but not significant effect on profitability (ROA) in the telecommunications companies listed on the Indonesia Stock Exchange in 2012-2018.

Effect of Liquidity on Profitability. Based on the panel data regression analysis, the significance value is 0.0087 < 0.05. Thus, at the 95% confidence level, liquidity (CR) has a significant negative effect on profitability (ROA) in the telecommunications companies listed on the Indonesia Stock Exchange in 2012-2018.

2. Coefficient of Determination and F Test
Table 4 shows the coefficient of determination and the F-statistic results. Based on the regression model, Adjusted R-squared = 0.481094, meaning that all independent variables simultaneously explain 48.1094% of the dependent variable, with the rest influenced by other factors not included in the research model. The F-statistic is 12.07418 with a probability of 0.001700. Since the F-statistic probability (0.001700) < α (0.05), the operating leverage, financial leverage, and liquidity variables simultaneously have a significant effect on the profitability variable.
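To make the estimation procedure concrete, here is a minimal Python sketch of a fixed-effects panel regression of ROA on DOL, DFL, and CR using statsmodels in its least-squares dummy variable (LSDV) form; the file name, DataFrame layout, and column names are assumptions for illustration, not the study's actual code or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per company-year with columns
# company, year, roa, dol, dfl, cr (hypothetical file).
df = pd.read_csv("telco_panel_2012_2018.csv")

# LSDV form of the fixed effect model: company dummies absorb
# firm-specific intercepts, mirroring entity fixed effects.
model = smf.ols("roa ~ dol + dfl + cr + C(company)", data=df).fit()

print(model.summary())               # t statistics and p-values per regressor
print(model.rsquared_adj)            # compare with the reported adjusted R-squared
print(model.fvalue, model.f_pvalue)  # overall F test of the regression
```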
IV. DISCUSSION
The success or bankruptcy of telecommunications companies is the duty and responsibility of company management, and depends on the policies management has taken to support every activity of the company in an effort to achieve the goal of maintaining a stable level of profitability. Investors need to pay attention to the effectiveness of the company in managing its assets to generate profits. The results of this study are expected to provide useful information for investors in making the right decisions regarding their investments by taking into account the financial ratios measured by ROA, DOL, DFL, and the current ratio.

A. The Effect of Operating Leverage on Profitability
Based on the data analysis and hypothesis testing conducted in this study, the operating leverage factor, proxied by DOL, has a positive but not significant effect on profitability, proxied by ROA. A positive relationship between DOL and ROA means that an increase in operating leverage tends to be followed by an increase in company profitability. This study has the same results as the research conducted in [14]: the size of profitability is influenced by the fixed costs borne by the company. Operating leverage arises whenever the company has operating costs that must be covered regardless of the units produced. The level of operating leverage of a company at a given output level indicates the percentage change in profits due to a change in output. However, the magnitude of changes in sales is not proportional to the changes in operating profits obtained by the company, resulting in a positive but not significant effect of operating leverage on profitability.

B. The Effect of Financial Leverage on Profitability
Based on the data analysis and hypothesis testing conducted in this study, the financial leverage factor, proxied by DFL, has a positive but not significant effect on profitability, proxied by ROA. A positive relationship between DFL and ROA means that an increase in financial leverage tends to be followed by an increase in company profitability. This study has the same results as the research conducted in [11]: a high DFL, which results from the financial leverage analysis, provides leverage to generate high profitability as well. And if profitability rises, it shows the ability of the capital invested in overall assets to generate profits for shareholders. Vice versa, if DFL decreases, the leverage to generate ROA will also decrease, meaning that profits available to shareholders have decreased. If ROA increases, then in addition to increasing the welfare of shareholders, it also creates high confidence in the company's success in managing the company and can attract new investors to invest in the company.

C. The Effect of Liquidity on Profitability
Based on the data analysis and hypothesis testing conducted in this study, the liquidity factor, proxied by CR, has a significant negative effect on profitability, proxied by ROA. The negative relationship between CR and ROA means that a decrease in liquidity will be followed by an increase in the company's profitability. This research has the same results as the research conducted by Faizal Taufik Ibrahim [9]: a company that has a high level of liquidity tends to have low profitability. High liquidity reflects that the company's current debt is high.
Companies with high liquidity prefer funding from debt, and funding from debt causes company profits to decrease because the company must pay the debt along with a fixed expense, namely the interest expense. To face increasingly fierce telecommunications competition, companies invest more in fixed assets. This reduces the level of profitability of the company, because fixed assets produce results only over a long period, not immediately. Thus, high liquidity is not always profitable, because it leaves idle funds that could instead be invested in projects that benefit the company.

V. CONCLUSION
Operating leverage, financial leverage, and liquidity simultaneously affect the profitability of telecommunications companies listed on the Indonesia Stock Exchange in the 2012-2018 period. Operating leverage (DOL) has a positive but not significant effect on profitability (ROA), meaning that increases in operating leverage (DOL) tend to be accompanied by increases in profitability (ROA). Financial leverage (DFL) likewise has a positive but not significant effect on profitability (ROA). Liquidity (CR) has a significant negative effect on profitability (ROA), meaning that any increase in liquidity (CR) will reduce profitability (ROA). The concepts of operating and financial leverage are useful for financial analysis, planning, and control. However, operating and financial leverage are shown to have no significant effect on the profitability of telecommunications companies, so the size of the profitability earned by telecommunications companies is not determined by business risk and financial risk. In addition, management must manage the company's liquidity, because liquidity has been shown to have a significant effect on company profitability. A liquidity value that is too high has an unfavorable effect on profitability.
Preservation of neurocognitive function in the treatment of brain metastases

Abstract
Neurocognitive function (NCF) deficits are common in patients with brain metastases, occurring in up to 90% of cases. NCF deficits may be caused by tumor-related factors and/or treatment for the metastasis, including surgery, radiation therapy, chemotherapy, and immunotherapy. In recent years, strategies to prevent the negative impact of treatments and ameliorate cognitive deficits for patients with brain tumors have gained momentum. In this review, we report on research that has established the efficacy of preventative and rehabilitative therapies for NCF deficits in patients with brain metastases. Surgical strategies include the use of laser interstitial thermal therapy and intraoperative mapping. Radiotherapy approaches include focal treatments such as stereotactic radiosurgery and tailored approaches such as hippocampal avoidant whole-brain radiotherapy (WBRT). Pharmacologic options include use of the neuroprotectant memantine to reduce cognitive decline induced by WBRT and incorporation of medications traditionally used for attention and memory problems. Integration of neuropsychology into the care of patients with brain metastases helps characterize cognitive patterns, educate patients and families regarding their management, and guide rehabilitative therapies. These and other strategies will become even more important for long-term survivors of brain metastases as treatment options improve.

Treatments for brain metastases include radiation therapy, surgery, chemotherapy, and newer techniques such as immunotherapy or laser interstitial thermal ablative therapy. While the goal of these therapies is to improve progression-free and overall survival, they can also cause brain injury leading to neurocognitive dysfunction. Understanding the mechanisms by which these therapies can damage healthy brain tissue is vital for educating our patients and their caregivers about these impairments and for developing strategies to mitigate or prevent the injury.

Declines in NCF are associated with a significant impact on the individual's quality of life (QOL), including losing the ability to perform a job, safely operate a motor vehicle (estimated at over 40% in one study 7), take care of a family, manage a household, and even care for oneself. 8-10 In a study examining the relationship between NCF, QOL, and activities of daily living (ADLs), 11 NCF deficits were strongly associated with problems with ADLs and QOL in patients with brain metastases who were treated with whole-brain radiation therapy (WBRT). After treatment, declines in NCF preceded and predicted ADL and QOL deterioration, which occurred in a substantial percentage of brain metastasis patients (40% and 34%, respectively). The NCF deficits experienced by patients with brain metastases constitute a threat to the individual in a way that is distinct from the other symptoms of cancer, striking at their identity and sense of independence, 12 undermining their sense of productivity and meaning in life and their financial security, and potentially limiting their access to care if health insurance coverage through an employer is lost. Many patients with brain metastases have a poor prognosis, whereas others are benefiting from new treatments that are extending progression-free and overall survival. In either case, it is incumbent upon treatment providers to optimize the patient's QOL, providing the best possible treatment of disease and minimizing the deleterious impact of those treatments on NCF.
Mechanisms of Brain Injury and Associated Neurocognitive Function Decline
When considering radiation therapy for the treatment of brain metastases, one can focus on delivery techniques. Classically, WBRT has been employed for disseminated metastases or prophylactically for cancers such as small cell lung carcinoma. As it has been recognized that WBRT can lead to moderate to severe neurocognitive dysfunction, more modern treatment paradigms have shifted to using stereotactic radiosurgery (SRS) to target solitary and oligometastatic disease, which reduces the risk of NCF impairment without compromising progression and survival endpoints. 4

RT fundamentally leads to DNA damage via the generation of reactive oxygen species. In addition to the production of DNA double-strand breaks and single-strand breaks in tumor cells, all components of healthy cells are damaged by radiation. The primary mechanisms of this damage are likely activation of inflammation of the neural tissue along with activation of microglia, which represent CNS-derived macrophage-like cells. 13 Key mediators of neurotoxic inflammation include tumor necrosis factor-alpha (TNFα) and interleukin-8 (IL-8). This inflammation can occur during or immediately after RT, leading to the acute cognitive side effects and the chronic white matter loss seen in patients exposed to RT, with late loss of microglia. 14 Radiologically, these phenomena can manifest as acute inflammation during radiation therapy and widespread leukoencephalopathy years after radiation therapy is completed. 15 In concert with increased neuroinflammation, radiation disrupts the normal functioning of neural progenitor cells, particularly in the region of the hippocampus, a neural structure of the temporal lobe that is critical for learning and memory. 16

Historically, neurocognitive decline has been described as occurring at different stages during and after treatment with WBRT: acute changes that occur within days of initiation of treatment, subacute changes evolving in the weeks after treatment through the first few months after completion, and chronic/progressive changes, which generally begin to appear about 6 months after treatment and lead to inexorable deterioration. More recently, randomized trial data from patients undergoing treatment for brain metastases with or without the use of WBRT have demonstrated that many patients show declines in memory and executive functioning by 4 months after treatment. 4,17 Clinically, these deficits may manifest as "forgetfulness," with patients requiring lists and reminders, and often depending on caregivers to compensate for these deficits and their impact on ADLs. As a considerable minority of patients are long-term survivors following WBRT, strong data are lacking regarding longer-term neurocognitive outcomes; however, clinically the usual course is a plateau or slow decline in memory, with more dependence on caregivers for ADLs.

Prevention and Treatment of Neurocognitive Decline From RT
Recognizing the importance of RT in the management of brain metastases, diverse strategies have been explored to mitigate the cognitive morbidity of treatment. The goal of these strategies has been to achieve equivalent disease control while reducing cognitive side effects by using strategies such as focal irradiation, avoidance of critical neural structures, and neuroprotectant therapies.
Promising results from early studies suggested that individuals with brain metastases treated with focal RT such as SRS (Figure 1A) have better neurocognitive outcomes compared to those treated with SRS plus WBRT. 17 The pivotal multicenter phase III clinical trial 4 included 213 individuals with 1-3 brain metastases. Subjects were randomized to receive either SRS alone or WBRT plus SRS. Patients who received only SRS had better NCF outcomes both at an early time point (the proportion of patients experiencing cognitive deterioration at 3 months was 19% after SRS alone compared to 46% with SRS plus WBRT) and for long-term survivors (at 12 months, 43% of WBRT plus SRS patients had deterioration on an executive function task, compared with 0% of the SRS alone patients). Although the WBRT group had lower rates of brain metastasis recurrence (distant control), there was no significant difference in overall survival (hazard ratio, 1.02; 95% CI, 0.75-1.38; P = .92).

Given the frequency of memory impairment after WBRT 18 as well as the role of the hippocampus in neuronal regeneration and plasticity, an additional neuroprotective strategy has been to modify treatment to selectively avoid the hippocampal region (hippocampal avoidance; HA-WBRT; see Figure 1B and C). 19,20 A phase II single-arm study evaluating HA-WBRT found improved outcomes on tests of memory performance in comparison to an expected level of impairment based on historical outcomes. 21

The use of pharmacological agents to protect against radiation-induced cerebral injury has been an additional strategy to reduce neurocognitive morbidity. Memantine, an N-methyl-d-aspartate (NMDA) receptor antagonist that reduces harmful excessive stimulation of NMDA receptors, has been shown to reduce the neurotoxicity of radiation therapy. 22,23 Brown et al. 24 found that those who received memantine during WBRT showed a longer time to cognitive decline than those who did not receive memantine. The memantine group also had stronger performance on measures of executive functioning 16 weeks later and better processing speed and delayed recognition at 24 weeks. 24 More recently, the combined use of memantine and HA-WBRT 25 was demonstrated to further reduce the frequency of NCF decline in a large phase III clinical trial.

Mechanisms of Brain Injury and Associated Neurocognitive Function Decline
The impact of a metastatic tumor on the CNS relates to the overall burden of disease in the brain, most accurately quantified as the overall volume of brain disease. 5 Concomitant neurologic sequelae, including edema, seizures, and headaches, can also contribute to NCF dysfunction. Furthermore, the medications used to treat these complications can have adverse effects on neuronal function. While neurological injury leading to NCF decline is possible during neurosurgical procedures 26 due to trauma to the local healthy brain tissue, to the extent that resection reduces mass effect, edema, or disruption of CSF flow, surgery can lead to improved performance status maintained over a longer period of time. 27 Similar to radiation-induced damage, studies have found that after brain surgery, rats and mice demonstrate increased neural tissue inflammation with resulting induction of TNF-α and IL-8. 28,29 Additionally, brain-derived neurotrophic factor (BDNF) is reduced, with a consequent decrease in hippocampal neurogenesis. These changes are accompanied by NCF impairments in murine behavioral tests. 30 Xin et al.
have shown that by inhibiting proinflammatory signaling pathways, in particular nitric oxide (NO) pathways, one can rescue postoperative cognitive dysfunction in mice and rats. 31 Developing methods to limit neuroinflammation after surgery offers an opportunity to protect against and mitigate neurocognitive dysfunction. Minimizing the NCF Risk of Neurosurgery in Patients With Brain Metastases The neurocognitive risk of surgery is greatest when brain metastases arise near eloquent areas, particularly speech/language and memory-related areas in the dominant hemisphere. 32 Technological advances, such as intraoperative mapping of cognitive function during awake craniotomy, provide an opportunity for the monitoring of NCF during the procedure. Although awake craniotomy is more commonly employed in the resection of primary brain tumors, a recent review found that awake craniotomy for brain metastases was a viable option to reduce cognitive morbidity. 33 The review showed that surgery in or near eloquent cortex leads to increased risk of postoperative neurocognitive deficits compared with surgeries farther from eloquent regions; however, 73% of patients undergoing awake craniotomy were not found to have a decline on a brief bedside neurologic exam conducted by the surgeon. Of those who had a decline in the acute postoperative period, 96% showed subsequent improvement and recovery. Laser interstitial thermal therapy (LITT) involves neurosurgical stereotactic placement of a laser probe that kills tumor tissue with heat; 34 heating above 50°C leads to cell death. LITT is now becoming more widely used in patients with brain metastases. As with the injury induced by traditional neurosurgical procedures, the anatomical location and proximity to eloquent areas or areas involved in cognition are vital to understanding how LITT could impact NCF. One can glean information on long-term cognitive outcomes by looking at the use of LITT in epilepsy patients. Small nonrandomized studies indicate that memory decline can occur in patients with dominant medial temporal lobe epilepsy who undergo LITT involving the hippocampus, 35 but these patients were spared the language declines often seen in patients who undergo standard resective surgery for this condition. In patients with brain metastases, where the target of LITT is not functional neural tissue, the hope is that this approach could lead to reduced neurocognitive morbidity in difficult-to-reach areas of the brain compared with resective surgery. A study by Ahluwalia et al. of 39 patients, 20 of whom had brain metastases (19 more had radiation necrosis), demonstrated no reduction in neurocognitive performance. 36 The role of LITT in patients with brain metastases continues to be explored. Balancing the risks and benefits of surgery along with application of these new techniques in eloquent areas will continue to be the aim for neurosurgical procedures in brain metastases patients. Chemotherapy In concert with other therapeutics, the use of systemic agents in patients with brain metastases is expanding rapidly as the list of FDA-approved targeted therapies grows. The trend for chemotherapy in the treatment of brain metastases lies prominently with targeted agents. Nevertheless, there is continued use of agents such as methotrexate, capecitabine, and paclitaxel, all of which have been found to affect NCF.
For example, the neurotoxicity of methotrexate therapy has been well demonstrated in adult CNS lymphoma patients, 37 and adverse neurocognitive outcomes have been found in survivors of treatment for childhood leukemia. 38 Capecitabine effects on neurocognition have been reported but are generally mild, 39 while paclitaxel effects have also been of concern, given its anti-microtubule mechanism and known association with peripheral neuropathy and acute encephalopathy. 40 An extensive literature demonstrates the neurotoxicity of these and many other traditional chemotherapy agents (for a recent review, see Dietrich 41 ), which may manifest as acute or subacute neurocognitive syndromes or as more subtle cognitive deficits that are longer lasting (eg, chemobrain). Multiple mechanisms have been proposed for these effects, including inflammatory mechanisms, direct cellular toxicity, myelin damage, and loss of hippocampal neurogenesis. 42 Patients with brain metastases receiving high-dose or intrathecal methotrexate are further at risk for the development of methotrexate-induced leukoencephalopathy, demonstrated by NCF impairment in the setting of T2/FLAIR hyperintensities on magnetic resonance imaging, particularly when combined with radiation therapy. 43 In preclinical rat and murine models, exposure to methotrexate (with or without 5-fluorouracil) leads to NCF dysfunction, which was attributable to loss of neurons in the hippocampus and frontal lobes. 44 More recent work demonstrates that damage to pathways involved in oligodendrocyte integrity and adaptive myelination via BDNF signaling is responsible for cognitive impairment in mice exposed to methotrexate. 45 Hormonal Therapy While hormone-based therapies are not classic chemotherapy, these treatments are commonly used in the treatment of breast and prostate cancer and may contribute to cognitive impairment. 46 Theoretically, these effects reflect the ubiquitous expression of estrogen and androgen receptors in key areas of the brain involved in cognition, such as the prefrontal cortices and the hippocampus. 47 A recent review of this literature 46 concluded that there is evidence of cognitive impairment in patients with breast or prostate cancer after treatment with hormonal therapies. A longitudinal prospective study that followed women with breast cancer who either were or were not treated with hormonal therapies found no difference in cognitive symptoms or performance up to 6 years posttreatment, 48 though this analysis was limited to group comparisons and may have missed possible individual differences in response to therapy. In men undergoing androgen deprivation therapy (ADT) for prostate cancer, a longitudinal prospective study showed no differences in cognitive performance over a 3-year period between patients who were treated with ADT, patients who did not need such treatment, and healthy controls. 49 However, a similar study found increased risk for cognitive decline in patients treated with ADT and identified a genetic risk factor that appeared to markedly increase risk in a subset of patients. 50 Thus, it appears that there may be cognitive risks associated with hormone therapies, and additional research is needed to identify the relevant risk factors and longer-term outcomes.
Immunotherapy Immunotherapy has revolutionized the treatment of patients with melanoma and other cancers that regularly metastasize to the central nervous system; however, encephalitis due to autoimmune-induced inflammation in the brain can lead to both acute and chronic neurologic impairments. 51 The long-term implications for NCF are best understood when one considers the concomitant use of radiation therapy. McGinnis et al. developed preclinical models in mice exposed to immunotherapy, particularly immune-checkpoint inhibitors (ICIs), with and without concomitant radiation therapy. 52 All mice that received the combination of immunotherapy and radiation developed cognitive impairment; notably, this cognitive impairment occurred in the setting of tumor control. Mechanistically, microglia appeared to be activated by immunotherapy, with or without concomitant radiation. Identification of Risk for Cognitive Decline Despite efforts to reduce the cognitive risks of therapy detailed above, many patients with brain metastases will experience cognitive symptoms during the course of their disease, requiring a comprehensive clinical strategy for management. 3 Identification of individuals at greatest risk of NCF morbidity is an important aspect of treatment decision making. Advanced age and a higher degree of pretreatment leukoencephalopathy are both associated with a greater risk of cognitive dysfunction in patients with brain metastases who receive WBRT, 53,54 suggesting that the health of the underlying brain tissue contributes to NCF risk in these patients. Researchers have initiated studies into the genetic factors associated with the risk of NCF dysfunction in patients with primary brain tumors and those who received treatment for CNS and non-CNS cancers. Variations on a theme implicating apolipoprotein E (APOE), a gene related to the risk of Alzheimer's disease, are present in the literature pertaining to cancer-related cognitive impairment (CRCI) in breast cancer patients. 55 Unfortunately, the literature is at odds as to whether APOE genotype is meaningful in predicting the risk of CRCI. Preliminary work suggests that those with a high-risk APOE genotype experience greater cognitive decline when they undergo WBRT than do those with a lower-risk genotype. 56 Correa et al. 57,58 showed that specific SNPs in the catechol-O-methyltransferase (COMT), BDNF, and dystrobrevin-binding protein 1 (DTNBP1) genes can be associated with dysfunction in a myriad of cognitive domains. How these findings translate specifically to patients with brain metastases remains to be determined. Evaluation and Monitoring Monitoring NCF in the neuro-oncology clinic is difficult for clinicians because brief screening measures, such as the Mini-Mental State Examination (MMSE 59) and the Montreal Cognitive Assessment (MoCA 60), have only modest ability to detect symptoms in brain metastasis patients. 61,62 Thus, careful clinical inquiry regarding these symptoms is an important first step in assessing cognitive function during clinic visits. Self-report surveys can be used to inquire about subjective NCF changes in the context of QOL assessment with scales such as the Functional Assessment for Cancer Therapy-Brain (FACT-Br 63) and the European Organization for Research and Treatment of Cancer Quality of Life Scale (EORTC-QLQ-C30 64), including newly developed metrics that are moderately correlated with cognitive complaints.
65 Neuropsychological (NP) evaluation is the most sensitive method of identifying cognitive dysfunction in patients with cancer, and specific sensitive tests have been recommended by the International Cognition and Cancer Task Force (ICCTF). 66 Neuropsychological evaluations have been flexibly integrated into the neuro-oncology clinic, including in metastatic brain tumor boards. 67 These evaluations, often abbreviated to minimize burden on the patient, 68 are sensitive to NCF changes and can detect progression of brain metastases prior to MRI. 69 Integrating NP evaluations into the care of patients with brain metastases provides an understanding of NCF and related symptoms, recommendations for treatment, and guidance to the patient and family, and is recommended in the most recent guidelines issued for CNS cancers by the National Comprehensive Cancer Network (NCCN; Section Brain E). 70 Pharmacotherapy Medications used to treat cognitive symptoms have been trialed in brain tumor patients. In addition to the neuroprotectant role of memantine detailed above, pharmacologic agents used in the treatment of memory impairment and attention deficits in other neurologic populations have been studied. Most of this research has been in patients with primary brain tumors, but a few studies have also included patients with brain metastases. An early study of the memory enhancer donepezil in a mixed group of patients with brain tumors had only one brain metastasis patient at baseline, who failed to complete the follow-up assessments, 71 illustrating the difficulties of studying treatment outcomes in a patient population with such dismal survival. The largest study of donepezil in patients with brain tumors (n = 198) was a randomized placebo-controlled trial that included 53 patients (25%) with brain metastases 72 and measured cognitive effects at 12 and 24 weeks of treatment. In the 74% of patients who completed follow-up visits (the percentage with brain metastases was not reported), there were subtle indications of a treatment effect on one measure of recognition memory. Attention-enhancing medications, such as methylphenidate and modafinil, have been evaluated in mixed groups of brain tumor patients, though these studies too have largely excluded patients with brain metastases. Although early studies of this approach suggested some benefits, 73 randomized placebo-controlled trials failed to replicate the findings, including the only study to include patients with brain metastases, 74 suggesting that expectancy effects may play a significant role in the experience of patients prescribed these medications. It should be noted that this study evaluated fatigue, rather than cognitive function, as the primary endpoint. Rehabilitative Therapy Cognitive rehabilitation is the use of therapeutic strategies to minimize the impact of NCF deficits on everyday functioning and/or improve cognitive function, which may include education in compensatory strategies as well as massed practice of cognitive exercises intended to provide neurocognitive stimulation. The majority of these studies have evaluated cognitive rehabilitation in patients with primary brain tumors and have suggested some benefit for those patients who receive training in specific cognitive strategies, such as the use of mnemonic strategies for memory problems 75 and goal management training for executive function problems.
76 Other approaches, such as combining the training of compensatory strategies with "cognitive exercise" activities, have shown at least partial benefit in randomized controlled studies. 77 Studies using remote methods (eg, telephone, computer) to deliver rehabilitation have also shown promise, 78 including a method of cognitive stimulation leading to improved NCF test performance. 79 To date, only two cognitive rehabilitation studies have included patients with brain metastases, both of which used variations of cognitive exercise training. 79,80 These small studies reported positive impacts of cognitive training but are limited to some extent by the lack of a control group 80 and small sample size. 79 While these studies are opening doors for new methods, it has yet to be demonstrated that improvements on NCF tests or computerized exercises translate to benefits in the real world. Additional approaches to improve cognition in patients with brain tumors have included physical rehabilitation, 81 which demonstrated a positive effect on MMSE scores in patients with brain tumors (including metastatic) in the weeks after surgery. There are hopes that other strategies such as exercise 82 and herbal strategies 83 may prove to be helpful, though the prevailing view in the field is that the findings are too preliminary to form the basis of recommendations at this time. 84 Future Directions The continued need for radiation therapy in the treatment of brain tumors and the increasing prevalence of brain metastases drive research into improving NCF for patients receiving brain radiotherapy. Model systems spanning the in vitro and in vivo spaces, including conventional and 3D culture systems, lend themselves to exploratory studies of neuroprotectants, while small animal radiation platforms and adaptations of clinical radiation therapy equipment 85,86 enable in vivo validation and testing of candidate genes and drugs, as well as histopathology and imaging studies. [87][88][89] Perhaps most importantly for preclinical research, elegant work to refine behavioral and neurocognitive testing in laboratory animals, often aided by complementary research from fields such as Alzheimer's disease, has developed assays such as the novel object recognition test and Morris water maze, and correlative functional tests such as the rotarod, to presage neurocognitive endpoints in humans. 90,91 Numerous clinical trials are currently studying a wide range of approaches to improve cognitive outcomes in brain metastasis patients (see Table 1). On the drug discovery front for neurocognitive preservation in patients receiving RT, promising results from preclinical studies of several classes of small molecules have led to subsequent clinical trials, while more recent results hold promise for the future. Preclinical strategies to block the cytotoxic effects of radiation-induced NMDA channel activation contributed to the successful use of memantine. 24 Combinations of memantine with AMPA receptor inhibitors are now planned for patients with primary brain tumors receiving RT 92 and could soon extend to patients with brain metastases. Recently, the development of manganese porphyrin compounds that alter the redox biology of mitochondria has generated interest in them as possible dual tumor radiosensitizers and normal tissue radioprotectors.
93 Preclinical findings of preserved tumor control and neuroprotection with enhanced cognitive function following irradiation in mice spawned clinical trials in several cancer types, including brain metastases 87 (NCT03608020). The role of GSK-3beta inhibition as a general neuroprotection strategy that prevents radiation necrosis 88 is also exciting, and some clinical trial data exist for tideglusib in Alzheimer's disease patients. 94,95 Many other exciting data for novel drugs that target pathways such as hedgehog signaling 96 and modulation of the complement cascade 97 also show potential to improve neurocognitive outcomes in brain metastasis patients receiving radiotherapy. Another strategy toward improving neurocognitive outcomes in patients receiving brain-directed RT leverages advanced radiation therapy techniques. Some of these techniques are in current use and involve precise control over radiation dose deposition such that anatomical regions of the brain are spared damage. As noted above, HA-WBRT and SRS have shown neurocognitive benefit, 4,25 and further technical advances in the administration of SRS simultaneously to multiple target lesions promise to expand this technique to more patients with a high burden of brain metastases (NCT02886572). Upcoming trials will further differentiate the advantages of these approaches, and results are eagerly awaited (NCT03550391). While these techniques utilize the most advanced radiation therapy technologies in current clinical use, a newer technique has recently emerged that seeks to maximally exploit the fundamental differences in radiation biology responses that distinguish tumor from normal tissue. "Flash" radiation therapy employs ultra-high-dose-rate radiation delivery (40-100 Gy/s) to harness a theoretical difference in normal tissue responses to radiation that has implications for radiotherapy to multiple areas of the body, including the brain. 98,99 Although this approach is not yet available for widespread use, the first clinical tests of this technology and the development of clinical instruments look promising. 100,101 Lastly, biotechnology strategies that push the limits of current science focus on radiation-induced loss of neural stem cells in the hippocampus. 16 Neural stem cell transplantation is theoretically possible, and with current stem cell technologies one can contemplate autotransplant of a patient's own induced neural stem cells. Preclinical studies indicate that this approach could be beneficial 102,103 and might be warranted in the increasingly plausible case that long-term cancer control in brain metastasis patients is attainable. Summary Neurocognitive sequelae are an unfortunate reality for most patients with brain metastases; they can be caused by the metastatic tumors, treatment for the systemic disease, and treatment directed at the brain. Numerous advances over the past two decades, including neurosurgical techniques, focal delivery of radiotherapy, and neuroprotectant strategies, have reduced the negative impact on the brain. Integration of NP assessment into the routine care of patients with brain metastases allows for monitoring of cognitive outcomes and tailoring of treatment. Rehabilitative therapies and pharmacologic treatment of cognition are useful options for patients. As therapeutic options for cancer and brain metastases continue to improve, the focus on the neurocognitive outcomes of long-term survivors will become even more important.
Case Example The following case example illustrates the multiple opportunities to integrate many of the techniques we have described into the clinical care of a patient with brain metastases to optimize cognitive outcome. The patient, a 67-year-old Caucasian man, initially developed a mass on the left upper back and underwent resection, with pathology confirming malignant melanoma. He had been treated with combination immunotherapy (ipilimumab + nivolumab) for approximately 3 months when he developed altered mental status. Brain imaging at the time was unrevealing, and the patient was suspected to be experiencing ICI-related encephalitis. He underwent treatment with intravenous immunoglobulin and neurorehabilitation during a 2-month hospitalization and ultimately recovered, though he experienced a slightly reduced level of cognitive functioning compared with his normal baseline. ICI therapy was discontinued. Unfortunately, about 6 months later, surveillance brain imaging showed a subcentimeter enhancing lesion in the left lateral temporal lobe, which was felt to represent melanoma metastasis (Figure 2A). Based on the literature demonstrating adequate local control and reduced cognitive morbidity with SRS as opposed to WBRT, the patient was treated with single-fraction SRS (18 Gy). At the time of SRS, the patient reported persistent changes in his cognitive function compared with his usual baseline. Neuropsychological evaluation was requested at that time. During the interview, the patient reported difficulties with concentration and with the ability to hold information in mind while multitasking (eg, working memory). He also described deficits in recent memory, such as forgetting conversations or things that he had agreed to do. He and his wife felt that these problems had been present since he recovered from ICI encephalitis and had not changed significantly since the new development of brain metastasis or the SRS treatment of that lesion. The NP evaluation demonstrated that the patient was a man of above-average premorbid ability who was experiencing relative deficits in aspects of attention, including lower than expected encoding of new information into memory (Figure 3), likely reflecting mild long-term sequelae of his protracted encephalitis. The patient participated in a feedback session in which he and his wife integrated the cognitive information with daily goals. Numerous strategies to improve memory encoding were recommended. The patient felt confident in his ability to independently integrate these recommendations in his workplace, was functioning well at home, and opted not to pursue cognitive rehabilitation therapy. Over the ensuing year, the patient was treated with imatinib and his systemic melanoma was well controlled. Unfortunately, approximately 1 year after SRS, the left temporal lesion showed increased size and contrast enhancement as well as intratumoral hemorrhage and increased surrounding edema in the left temporal lobe (Figure 2B). It was unclear whether these changes reflected radiation necrosis or recurrent melanoma. In the multidisciplinary brain metastasis tumor board, neurosurgery, radiation oncology, hematology-oncology, and neuropsychology specialists agreed that surgical resection was indicated if it could be accomplished with minimal cognitive morbidity.
Neuropsychological re-evaluation was conducted and showed significant declines in memory and aspects of language compared with the assessment that had been conducted 1 year earlier (Figure 3). These findings suggested that the increased size of the lesion and surrounding edema were indeed affecting NCF. In consultation with the patient, the decision was made to resect the lesion. A functional magnetic resonance imaging study was conducted to identify foci of critical language activity in the left hemisphere and identified a site of putative receptive language function ~1 cm from the lesion boundary. Diffusion tensor imaging was conducted during the same MRI session to identify critical fiber tracts and demonstrated the location of the arcuate fasciculus passing within 7 mm of the mass. These imaging studies were fused with the structural imaging in the surgical navigation software. Intraoperative mapping of language function demonstrated an area 1 cm superior to the lesion in which stimulation produced deficits in language comprehension. The cortex overlying the lesion was tested with stimulation mapping, and no changes were elicited in comprehension, naming, or reading. A gross total resection was achieved, with pathology demonstrating recurrent melanoma. The patient remained awake throughout the procedure and demonstrated no gross deficits postoperatively. The patient was seen for a repeat NP evaluation ~1 month after surgery. At that point, he reported good recovery of function and had returned to his part-time professional role as a technical advisor to a biotechnology firm. He and his wife reported improved memory compared with the preoperative time point but acknowledged increased fatigue and reduced cognitive endurance since surgery. The evaluation demonstrated a significant improvement in memory compared with the preoperative assessment, returning to the level of performance seen 14 months prior (Figure 3). There were also improvements in confrontation naming and verbal fluency, though there was a decline in phrase repetition. The patient participated in a short course of speech/language and cognitive rehabilitation therapy. During therapy, he developed additional strategies to support memory and assist with word-finding difficulties. At the time of this submission, he continues to function effectively in his job and is fully independent in ADLs. At the most recent follow-up visit, he reported good overall QOL and denied difficulties in subjective cognitive function. This case demonstrates the potential for positive outcomes from a multidisciplinary process that integrates cognitive outcomes into the management of brain metastasis. Figure 3. Graphical representation of neuropsychological test data in multiple domains, reported as standard scores compared with healthy controls matched for age and, where appropriate, education level. A Z-score of 0 represents the middle of the average range. Based on premorbid estimates of functioning, the patient's expected level of functioning was above average. The initial evaluation showed slight reductions from expected levels of functioning. At the time of tumor progression, the patient showed a substantial decline in recent memory for both verbal and visuospatial information, with more subtle declines in other domains. Postoperative testing showed a marked improvement in memory performance, with return to the baseline level of functioning.
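To make the scoring convention used in Figure 3 concrete, here is a minimal Python sketch converting raw neuropsychological test scores to Z-scores against normative data; the test names, normative means/SDs, and raw scores are hypothetical placeholders chosen for illustration, not the patient's actual results.

```python
# Illustrative only: Z-scoring raw test results against normative data.
# All numbers below are hypothetical, not taken from the case report.

norms = {                        # test: (normative mean, normative SD)
    "verbal memory": (50.0, 10.0),
    "confrontation naming": (55.0, 8.0),
    "verbal fluency": (42.0, 9.0),
}

def z_score(raw: float, mean: float, sd: float) -> float:
    """Z = 0 is the middle of the average range; negative = below the mean."""
    return (raw - mean) / sd

visits = {
    "initial evaluation": {"verbal memory": 58, "confrontation naming": 60, "verbal fluency": 47},
    "tumor progression":  {"verbal memory": 31, "confrontation naming": 49, "verbal fluency": 38},
}

for visit, raw_scores in visits.items():
    zs = {test: round(z_score(raw, *norms[test]), 2) for test, raw in raw_scores.items()}
    print(visit, zs)
```

Plotting such Z-scores per domain across visits yields exactly the kind of longitudinal profile described in the caption above.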
2021-12-03T05:20:19.746Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "146fbcd42e2aca7ddf6eeaa529c960481ce91033", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/noa/article-pdf/3/Supplement_5/v96/41355750/vdab122.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "146fbcd42e2aca7ddf6eeaa529c960481ce91033", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [] }
73662921
pes2o/s2orc
v3-fos-license
BUSINESS PROCESS MANAGEMENT NOTATION FOR A COSTING MODEL CONCEPTION Purpose Aims to show the importance of business process modeling as a precondition for information system design. Although many managers worry about the entity's expenses, some are unaware of the processes and procedures adopted by their subordinates. The aim is to calculate the spending at each step to enable proper business process management. Design/methodology/approach It presents the concepts of Activity-Based Costing (ABC) and its update, TDABC (Time-Driven Activity-Based Costing), to support the development of a costing system for public universities. Findings It can be concluded that the processes implemented in public services are both complex and bureaucratic, mainly due to regulations. The bidding procedure for acquiring materials or services demands activities from eight sectors. Research limitations/implications The contribution of this study is to show how business process modeling can be applied to the public service for the optimization of resources. This research presents the usual flow of bids, but in practice there is some variation. Practical implications The article provides a starting point for the redesign of complex and inefficient processes and procedures in order to reduce costs. Originality/value Although cost accounting presents different types of costing methods, managers need to know and critically examine the existing business processes. INTRODUCTION The responsibility of a public university is not only teaching, but also providing extension activities and resources for research in specific areas. As part of the public administration, such entities are subject to excessive rules and regulations, both in Brazil and in the United Kingdom, as shown by research (Andrews et Boyne, 2014). In addition to legal enforcement, public expenditure control is necessary for a proper allocation of key resources in areas of the Brazilian government (Araújo, 2011). In Brazil, all spending conducted by the Public Administration needs to be approved by the legislature through an annual budget law. Beyond control, it is necessary to analyze the quality of public spending. Therefore, institutions need to know the costs of their activities. Among the various costing methods, international public universities have adopted ABC (activity-based costing) (European University Association, 2008). Before implementing a costing system in any organization, it is necessary to know and organize all processes and activities. Allocating costs to areas with different products or services may otherwise lead the manager to a decision based on false information. The present study aims to show the importance of understanding and modeling business processes, and how a business modeling tool can help to build a costing system model for public universities. A case study was used to allow an exemplification of the processes to be modeled. Therefore, at the Federal Institute of Education, Science and Technology of São Paulo, we sought to understand the bureaucratic complexity involved in developing a costing system applied to public universities. In this paper we use BPMN (Business Process Management Notation), as it is the language adopted by the OMG (Object Management Group) as the standard language for business process modeling and is used by many analysts (Recker, 2010). This article is structured into 4 parts: Section 2 provides a literature review. Section 3 provides an overview of the methodology.
Section 4 presents the modeling of the business processes, establishes the requirements presented by the Dean of Administration for the notation and the tool used to define business processes, and presents an analysis of solutions to be implemented for the development of a costing system. Finally, Section 5 summarizes the work presented and draws some conclusions about its development. LITERATURE REVIEW Activity-Based Costing (ABC) is a costing system developed in 1988 by Robin Cooper et Robert Kaplan in order to allocate indirect costs to objects through cost drivers. It highlights three rules for its use. First, the concentration on expensive resources. Second, the emphasis on resources whose consumption varies significantly by product and type of product. Third, the concentration on resources whose demand patterns are uncorrelated with traditional allocation measures such as direct labor or material processing time (Kaplan et Cooper, 1998). Kaplan et Cooper (1998) define activity-based costing as an economic map of the expenses and the organization's profitability based on organizational activities. This costing system offers companies an economic map of their operation showing the existing and projected cost of activities and business processes which, in turn, explains the cost and profitability of each product, service, and customer operation. Cropper et Cook (2000) conducted a study on the implementation of activity-based costing in UK universities in the second half of the 1990s and concluded that deployment occurred slowly, even with pressure from donors and government. The implementation of ABC involves time and resources. It requires organizational changes and employee acceptance. It also requires investments in information technology and materials for data collection. Even with all the human, material, and physical investment, implementation does not guarantee satisfactory results in the short term (Roztocki et al., 2004). There is a conceptual proposal for the use of ABC and its variation ABM (activity-based management), represented by its proponents as a hexagon diagram, in order to improve decision-making operations. ABM is the way in which an entity can drive, measure, and control its aim of improving performance. Therefore, the creation and use of a performance assessment framework based on activities as the primary means of resource management, continuous improvement, and decision-making is necessary (Evans et Ashworth, 1995). In surveys conducted by Kaplan et Anderson (2007) on the implementation of ABC, the following problems were noted: • The interview and survey processes were slow and expensive. • The data were subjective and difficult to validate. • The data were expensive to store, process and report. • Most deployments were local and did not provide an integrated view of profit opportunities across the enterprise. • The model could not be easily updated to adapt to new circumstances. • The model was theoretically incorrect when it ignored the potential of unused capacity. TDABC simplifies the costing process, eliminating the need to interview employees and to survey the cost allocation of resources to activities before allocating them to cost objects (applications, products and customers). The new model assigns resource costs directly to the cost objects, using an elegant structure that requires only two sets of estimates, neither of which is difficult to obtain (Kaplan et Anderson, 2007).
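To make the "two sets of estimates" concrete, the minimal TDABC sketch below computes a capacity cost rate and per-activity costs in Python; the department cost, practical capacity, activity names, unit times, and number of process runs are hypothetical figures for illustration, not data from the works cited.

```python
# Minimal TDABC sketch. All figures (costs, capacity, activities, unit
# times, number of runs) are hypothetical, for illustration only.

total_cost = 120_000.0           # quarterly cost of a purchasing department ($)
practical_capacity = 60_000.0    # practical capacity for the quarter (minutes)
rate = total_cost / practical_capacity   # estimate 1: capacity cost rate ($/min)

# estimate 2: unit times (minutes) for each activity of the bidding process
unit_times = {
    "prepare requisition": 45,
    "draft bidding notice": 180,
    "evaluate proposals": 240,
    "issue payment order": 30,
}

activity_costs = {name: t * rate for name, t in unit_times.items()}
runs = 100                                   # process executions in the quarter
used_minutes = runs * sum(unit_times.values())
idle_cost = (practical_capacity - used_minutes) * rate

print(activity_costs)                        # cost assigned to each activity
print("cost per process run: $", sum(activity_costs.values()))
print("idle capacity cost:   $", idle_cost)  # TDABC makes unused capacity visible
```

Note how the idle-capacity cost falls out directly, addressing the last of the ABC problems listed above.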
A literature review identified thirty-six empirical contributions using TDABC over the period 2004-2012. This costing method was applied in logistics, manufacturing, services, healthcare, hospitality, and nonprofit services with potential benefits (Siguenza-Guzman et al., 2013), as well as in university libraries (Pernot et al., 2007; Siguenza-Guzman et al., 2014). There are also applications in private schools (Yilmaz et al., 2013). This reflects a concern with offering products or services of good quality at fair prices. Companies need to know their business processes, whatever the model they decide to apply. In this context, Business Process Management (BPM) can assist in process knowledge and in the documentation of all processes, activities, and procedures of the organization. BPM has two main intellectual antecedents. The first is the research of Deming (1952) et Shewhart (1986) on statistical process control, which is also the precursor of Six Sigma and of process improvement and management. The second antecedent is the concept of business process reengineering (Hammer, 1990; Hammer et Champy, 1993), which has positive and negative, but interdependent, points. BPM is a management discipline that integrates the strategies and objectives of an organization with customer expectations and needs by focusing on processes, end to end. This methodology encompasses strategies, objectives, culture, organizational structures, roles, policies, methods, and technologies to analyze, design, implement, and manage performance and to establish process governance (ABPMP, 2013). A process model is a visual representation of the sequential flow and control logic of a set of activities or related actions. Process modeling is used to obtain a graphical representation of a current and/or future process within an organization (IIBA, 2009). Business analysis is a precondition for information systems design and development, since it can reduce the number of misunderstandings between the business areas and the IT team. The Business Analysis Body of Knowledge (BABoK) presents many techniques for understanding business needs, eliciting business requirements, and modeling business processes (IIBA, 2009). There are several notation languages for process modeling: BPMN, Flowchart, EPC (Event-driven Process Chain), UML (Unified Modeling Language), IDEF (Integrated Definition Language) and VSM (Value Stream Mapping). BPMN is a standard for diagramming business processes created by the BPMI (Business Process Management Initiative), which was later incorporated into the OMG, a group that sets standards for information systems. This notation presents a set of symbols for modeling BPDs (Business Process Diagrams) covering different aspects of business processes. As in most notations, the symbols describe clearly defined relationships, such as activity flow and order of precedence. BPEL (Business Process Execution Language) is a workflow-oriented composition model that forms a central piece of the heavily modularized SOC (Service-Oriented Computing) model (Khoshkbarforoushha et al., 2014). The origins of BPEL come from WSFL (Web Services Flow Language) and XLANG, from IBM and Microsoft respectively. It is serialized as XML and follows a programming-in-the-large approach. BPMN has been widely used lately as it offers advantages over other forms of business process modeling. Modeling is the set of activities involved in creating representations of business processes.
The purpose of modeling is to create a complete and accurate representation of the way a process works and of the needs of its operation (ABPMP, 2013). BPMN is a language that facilitates communication between the organization and its stakeholders. It can be used to model processes from simple to complex, for any type of organization. For García-Dominguez et al. (2012), BPMN should be used for the approval of detailed designs. It should be used in repetitive processes with little variation, as well as for activity description. In their study, they concluded that the layout cannot model existing objects and their transitions. However, when agility and iteration in process design are required, they suggest the use of Value Stream Mapping (VSM). This tool was designed to identify problems and to create improvement plans for waste reduction. Giuseppe et al. (2014) studied a way to manage the risk of process deviations through "conformance-risk-aware design". What was expected or planned does not always correspond to reality; a Business Analyst should mediate this problem. METHODOLOGY The research method uses a case study approach combined with Business Analysis techniques from BABoK to model the business process for information system design. This paper aims to show the complexity of the bidding process in the public service for purchasing goods and services. We use the business process modeling technique with BPMN notation. Through knowledge of the process, it can be reconfigured to optimize human, material, and financial resources. This makes it possible to calculate the cost of idle capacity and to implement TDABC. The case study approach, defined by Yin (2014) as a research strategy that seeks to examine a phenomenon in depth within its context, was used to identify the business needs related to the costing process. We collected a data set through observation, interviews with employees, and analysis of internal documents of the organization to understand the process context. The documental analysis was based on material provided by the institution, including management reports and the institutional development plan, as well as information available on the Federal Institute of Education, Science and Technology of São Paulo (IFSP) website. It was possible to detail the currently mapped process flow. However, in loco research revealed some changes in the bidding flow of the studied institution. To model the organizational processes of the IFSP we used the Yaoquiang BPMN Editor 4.0 software, as it was the most appropriate: it performs a real-time consistency check during BPMN 2.0 modeling (Geiger et al., 2013). RESULTS Two cases that will be part of the costing model to be developed for the IFSP were mapped in this article: the requisition of materials and the process of acquisition of fixed assets and inputs. The first, the requisition of materials, shown in Figure 2, starts with the need for inputs for the institution's activities. For the activities carried out in laboratories, the library, classrooms, and offices, specific cost centers will be created because they generate quantifiable services. For the support activities, a single cost center can be created; this will be discussed in future meetings. The International Federation of Accountants defines inventory accounting through International Public Sector Accounting Standard (IPSAS) No. 12. This standard addresses the recognition (or not) of the expenses involved in acquisition and processing costs in the results of the entities.
In the system under development, materials consumed will be considered as costs. All Brazilian Public Administration entities are required by law to conduct bidding. This requirement comes from Federal Law No. 8,666, known as the law of tenders and contracts, and Law No. 10,520, which introduced another form of bidding, the reverse auction (pregão). To carry out any spending in the public sector, approval by the Legislature is still necessary, materialized in an annual budget law, which estimates revenues and fixes expenses for the following year. A simple process of purchasing goods and services demands activities from eight different sectors in this organization. A bureaucratic process is required to make the payment to the supplier or service provider, extinguishing the financial obligation. BPMN is helpful because it showed versatility in modeling the different situations of the acquisition of goods and services. These model applications generate BPMS (Business Process Management Systems), which have allowed a proper programming of the costing system. The convenience and the opportunity of purchasing a good or a service were analyzed, initiating a bidding process that follows the steps of the diagram in Figure 3. The process of acquisition of goods and services was modeled. From this point on, at least seventeen employees were involved. Each acquisition had a unit cost of direct labor of two hundred and thirty-nine dollars in 2014. Table 1 shows the costs of each activity and the time spent, obtained through interviews and salary queries. After this bureaucratic process, the payment is made to the supplier or service provider, extinguishing the financial obligation. CONCLUSION For both public and private HEIs, efficient use of resources is necessary. Therefore, they need to know their costs. The article has shown the importance of understanding business processes for building a cost model for public universities. Therefore, we conducted a case study at the IFSP, where the BPMN methodology was used to model its processes. TDABC is presented as a new costing method, with more simplified application in relation to ABC. We also studied the reduction of waste, whether of resources or of time. In the analyzed case, a simple process of acquiring materials and services demands activities from eight different sectors. These factors are mainly due to internal rules and legal regulations. To know the details of each process in any organization, it is necessary to map them. For this purpose, BPMN is a notation that is easy to understand. The contribution of this study is to present a business process modeling approach to optimize time and resources applied to public services. A limiting factor of this research is that there are some variations in the bidding flow of the studied institution. As future work, we will continue with the definition of system requirements for the development of costing software for HEIs.
2018-12-21T11:58:48.117Z
2016-09-29T00:00:00.000
{ "year": 2016, "sha1": "854b85b49c6b4c07f3b1d543be6ada23777217fb", "oa_license": "CCBY", "oa_url": "https://bjopm.emnuvens.com.br/bjopm/article/download/V13N3A2/BJOPMV13N3A2", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "854b85b49c6b4c07f3b1d543be6ada23777217fb", "s2fieldsofstudy": [ "Business", "Computer Science" ], "extfieldsofstudy": [ "Business" ] }
221005754
pes2o/s2orc
v3-fos-license
Integrability and braided tensor categories Many integrable statistical mechanical models possess a fractional-spin conserved current. Such currents have been constructed by utilising quantum-group algebras and ideas from "discrete holomorphicity". I find them naturally and much more generally using a braided tensor category, a topological structure arising in knot invariants, anyons and conformal field theory. I derive a simple constraint on the Boltzmann weights admitting a conserved current, generalising one found using quantum-group algebras. The resulting trigonometric weights are typically those of a critical integrable lattice model, so the method here gives a linear way of "Baxterising", i.e. building a solution of the Yang-Baxter equation out of topological data. It also illuminates why many models do not admit a solution. I discuss many examples in geometric and local models, including (perhaps) a new solution. Introduction The connection of lattice statistical mechanics to topological invariants has long been known, but perhaps not as well appreciated as it ought to be. Classic work of the '70s showed how the generators of the transfer matrix satisfied an algebra also having a graphical presentation. The canonical example is that of Temperley and Lieb, who related the Potts and six-vertex models by utilising an algebra bearing their name [1]. Simultaneously, a geometric expansion for the Potts model was developed where the Boltzmann weight of each term depends on the number of loops in it [2]. The two approaches were unified by showing that generators of the Temperley-Lieb algebra also can be written in terms of loops, where the weight per loop is built into the algebra [3]. A host of other lattice models subsequently were written in the same fashion [4,5]. The algebras themselves were also generalised, most notably to the Birman-Murakami-Wenzl algebra [6,7], providing the analogous setting for more complicated lattice height models already known [8]. A beautiful manifestation of this connection is in knot and link invariants [9]. The Temperley-Lieb algebra underlies the construction of the Jones polynomial, with the loops resulting from resolving the crossings that arise when projecting the links into two dimensions [10]. The mathematical structure needed in general is a braided tensor category. These categories arose when studying the braiding and fusing of operators in rational conformal field theory [11], and now are widely used in physics in the study of anyons [12,13]. Such categories give not only braid-group representations, but generalise algebras such as Temperley-Lieb to a larger set of rules that give linear relations between the isotopy invariants of labelled graphs. Defining lattice models in terms of a category makes it possible to find large classes of topological defects [14]. The connection between statistical mechanics and topology deepens when considering integrable lattice models. Boltzmann weights in an integrable model in two dimensions satisfy a trilinear relation, the Yang-Baxter equation. An essential ingredient is expressing its solutions in terms of a "spectral parameter". 2 Lattice models from categories The lattice models at the heart of integrability can be defined in terms of a fusion category. Fusion categories provide a method of defining and computing topological invariants of labelled trivalent graphs on the plane. In this section I describe key facts about fusion categories, and how to define integrable geometric lattice models in terms of them.
Geometric models have non-local Boltzmann weights, but provide the most transparent way of starting the analysis. Moreover, the category structure makes the translation of these results to locally defined height models straightforward. Evaluating graphs using fusion categories Here I summarize the key background needed to build the currents. Much more detailed reviews of tensor categories can be found in [11,34,12,35]. The core of a fusion category is a finite set of simple objects. Objects form a vector space, with simple objects a basis. Examples of simple objects include the primary fields in a rational conformal field theory and anyon types in a 2+1d system with topological order. The fusion algebra governs the tensor product of simple objects $a$ and $b$: $a \otimes b = \bigoplus_c N^c_{ab}\, c$, (1) with the non-negative integers $N^c_{ab}$ describing the sum over simple objects $c$. The irreducible representations of a semi-simple Lie algebra obey a fusion algebra, but since there are an infinite number of them, do not make up a fusion category. However, deforming the Lie algebra to a quantum-group algebra can truncate the allowed representations to yield simple objects obeying (1). A fusion category $\mathcal{C}$ gives a method for associating an isotopy invariant to a labelled planar trivalent graph called a fusion diagram $\mathcal{F}$. Each edge is labelled by a simple object in the category, and the labels $a$, $b$, $c$ on the edges touching a trivalent vertex must have $N^c_{ab} \neq 0$. The invariant associated to each $\mathcal{F}$ is called the evaluation, and denoted $\mathrm{eval}_{\mathcal{C}}[\mathcal{F}]$. Invariance under isotopy means that the evaluation remains the same under any continuous deformation of the fusion diagram preserving the labels. All fusion categories have an identity object $0$, obeying $a \otimes 0 = 0 \otimes a = a$ for all $a$. Labelling an edge by $0$ is equivalent to omitting that edge. For simplicity I mainly discuss only self-dual objects, which have the identity $0 \in a \otimes a$. Self-duality means that one does not need to include arrows on the edges of the diagram. In addition, I also make the simplifying assumptions described in sec. 2 of [14], for example considering only $N^c_{ab} = N^c_{ba} = 0, 1$. Two sets of linear identities allow fusion diagrams to be evaluated. F moves relate graphs as in (2), where the coefficients are called $F$ symbols. This identity relates two graphs by replacing the subgraph on the left with that on the right, leaving the rest of the diagram unchanged. Since the labels of the external lines $a, b, r, s$ in (2) do not change in the F move, one can think of the symbols as matrices acting on the internal labels $t$ and $t'$. The one additional ingredient needed to evaluate a planar fusion diagram is "bubble removal", which relates diagrams as in (3). The number $d_a$ associated with each simple object is called the quantum dimension, and like the $F$ symbols is specified by the category. Namely, each $d_a$ is the largest eigenvalue of the matrix $\hat{N}_a$ that has entries $(\hat{N}_a)^c_b = N^c_{ab}$. Thus $d_0 = 1$, and more generally $d_a d_b = \sum_c N^c_{ab}\, d_c$. (4) The nicest way of understanding the meaning of the quantum dimension comes from setting $a = b = 0$ in (3), showing that evaluating a single closed loop labeled by $a$ gives $d_a$. Setting $a = 0$ but having $b \neq 0$ means that "tadpoles" have vanishing evaluation. Given (2) and (3), one can evaluate any fusion graph simply by using F moves to make a bubble, removing it, and then repeating. The beauty of the category is that the evaluation is independent of the order in which the F moves are done. Eventually, this process yields a sum over loop configurations, evaluated simply with a weight $d_a$ per loop of type $a$.
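As a concrete illustration of the eigenvalue definition of $d_a$, the sketch below computes quantum dimensions from fusion matrices for the Fibonacci and Ising categories; these two examples are my choice for illustration here, not categories singled out at this point in the text.

```python
import numpy as np

# Fusion matrices (N_a)^c_b = N^c_{ab} for two standard examples.
# Fibonacci category: simple objects {0, tau}, with tau x tau = 0 + tau.
N_tau = np.array([[0, 1],
                  [1, 1]])
# Ising category: simple objects {0, sigma, psi}, with
# sigma x sigma = 0 + psi and sigma x psi = sigma.
N_sigma = np.array([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]])

for name, N in [("d_tau (Fibonacci)", N_tau), ("d_sigma (Ising)", N_sigma)]:
    d = np.max(np.linalg.eigvals(N).real)  # quantum dimension = largest eigenvalue
    print(name, "=", d)
# d_tau = (1 + sqrt(5))/2 ~ 1.618 (the golden ratio); d_sigma = sqrt(2) ~ 1.414
```

One can check (4) directly in these examples, e.g. $d_\tau^2 = 1 + d_\tau$ for Fibonacci.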
As an example of this evaluation procedure, a triangle can be removed by doing an F move and then bubble removal, as in (5). For the linear relations (2) to be consistent, the F symbols must satisfy a variety of constraints, most famously the pentagon identity. Simpler ones follow by using different F moves to simplify (5), giving (6). Other identities arise when one of the labels is $0$, giving (7). The former follows simply by omitting the external line labeled $0$, while the latter arises by doing an F move on (3) and noting that tadpoles vanish. A useful consequence of the latter identity in (7) is that two adjacent strands can be joined via (8). Geometric models The degrees of freedom in geometric models are expressed in terms of fusion diagrams. It thus seems most natural to define the models on a honeycomb lattice, with each edge labelled by some object in the category so that each trivalent vertex can be related to the fusion of these objects. However, integrable models are best dealt with on the square lattice. Thus I start out with the latter on the plane, with periodic boundary conditions in the horizontal direction and open in the vertical. The first step in defining a geometric model is labelling each edge of the square lattice by some object $\rho \in \mathcal{C}$. The next step is to crack open each vertex into two trivalent vertices connected by a horizontal line segment, as in (9). The additional horizontal line is labelled by an object $\chi \in \rho \otimes \rho$. The amplitudes $A_\chi$ are complex numbers not determined by the category data. The explicit quantum dimensions included look unwieldy here, but turn out to be a useful convention. The cracking open turns the square lattice into a brick lattice, topologically equivalent to the honeycomb lattice. A completely packed geometric model corresponds to taking $\rho$ to be a simple object. The degrees of freedom $\chi_v$ then live only on the vertices $v$ of the square lattice (the edges of the brick lattice that do not belong to the square lattice). Each labelling $\{\chi_v\}$ defines a planar fusion diagram $\mathcal{F}$ on the brick lattice, and the partition function is equivalently the sum over all allowed $\mathcal{F}$. The Boltzmann weight for each $\mathcal{F}$ has both local and non-local parts. The non-local part is simply the evaluation of the corresponding fusion diagram, and so depends only on its topology. The local part is expressed in terms of amplitudes $A_{\chi_v}$, each of which depends only on the label $\chi_v$ on the corresponding edge in $\mathcal{F}$. With these definitions the partition function of a completely packed geometric model is given by (10). A picture for the partition function can be written using (9) as in (11). Unlabelled solid lines in this paper are always $\rho$ lines. More general geometric models come by relaxing the requirement that $\rho$ be simple, so that $\rho = \oplus_j \rho_j$ for simple $\rho_j$. In such models the degrees of freedom live on the edges as well. The simplest and most famous examples of geometric models are loop models, where the fusion diagrams can be rewritten in terms of self-avoiding loops. Loop models arise from categories when there are only two channels $\chi = 0, 1$ allowed on each face, i.e. $\rho \otimes \rho = 0 + 1$. Using (9) gives configurations including trivalent vertices with labels $\rho, \rho, 1$. However, each such fusion diagram can be turned into a sum over loops by exploiting the F moves. The needed relation, (12), comes from (7), where (4) requires $d_1 = d_\rho^2 - 1$.
Via (12), any lines labeled by $1$ can be replaced by a linear combination of the two self-avoidances, and so (9) can be recast as (13). Using (13) at each vertex turns a sum over fusion diagrams into a sum over completely packed self-avoiding loops $\mathcal{L}$. The local weights for each loop configuration are determined by the coefficients in (13). Moreover, since this rewriting uses an F move, it does not change the evaluation. The evaluation is now easy to do, as each closed loop yields a weight $d_\rho$ per loop, giving the partition function (14), where $n_{\mathcal{L}}$ is the number of loops and $n_0$, $n_{\bar 0}$ count each type of avoidance in $\mathcal{L}$. The model is often called an $O(N)$ loop model for historic reasons, with $N$ the weight per loop. In the corresponding random-cluster model, the loops surround clusters of weight $Q = d_\rho^2$, which for integer $Q$ can be mapped onto the (local) $Q$-state Potts model [2,15]. Although loops and (14) look much simpler than fusion diagrams and (10), the identities stemming from the category make the latter formulation a much better setting for introducing and analysing the currents. Height models By construction, the Boltzmann weights of the geometric model are non-local. A remarkable fact is that for any geometric model built on a fusion category, there exists a local model with a related partition function. Simple examples came from [1], where the generators of the transfer matrix were shown to satisfy the Temperley-Lieb algebra. Local and non-local models give rise to different representations, but the algebra provides relations between the ensuing partition functions. Such algebraic results were generalised to the full category setting, under the name of shadow world [36,37,38,39]. The local models go under a variety of names, with RSOS (restricted-solid-on-solid), IRF (interactions round a face), and anyon chains among them. I call them height models. The heights are objects in the category $\mathcal{C}$ living on the dual square lattice, the faces of (11). They satisfy adjacency rules dictated by fusion rules coming from $\rho$, the object used to define the geometric model. These rules are conveniently displayed in fusion trees, fusion diagrams of the form (15). The $L$ vertical lines are all labeled with $\rho$, and I call them strands. The heights $h_j$ with $j = 0, \dots, L$ are the horizontal lines, and by the fusion rules must satisfy $h_{j\pm 1} \in \rho \otimes h_j$. For $\rho$ simple, this condition translates to $N^{h_{j\pm 1}}_{\rho h_j} > 0$. Each allowed labelling of such a fusion tree corresponds to a height configuration for one row of the lattice. For open boundary conditions, there are $L+1$ heights, while with periodic boundary conditions $h_0 = h_L$, giving $L$ of them. Operators in height models act on a vector space $V$ whose basis elements are all the allowed fusion trees for a given $\rho$ and $L$. Operators are then defined using a fusion diagram with some number of $\rho$ strands at the bottom and the same number at the top; see e.g. (16) for a two-strand operator. Such an operator acts on $V$ by gluing it to the tree somewhere. Although the gluing perhaps initially yields a more complicated fusion diagram, as long as it does not wrap around a cycle it can always be reduced to a fusion tree (15) by doing F moves and bubble removal. The rules of the category guarantee that the results are independent of how this reduction is done. The Boltzmann weights are built from the two-strand projection operators $P^{(\chi)}$ shown in (16). If $\rho$ is not simple, one defines a set of projection operators labeled by the simple objects on the strands.
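To illustrate the basis of $V$, here is a minimal sketch enumerating the allowed fusion trees (15) for the Fibonacci category with $\rho = \tau$, the setting of the "golden chain" anyon-chain literature; fixing both boundary heights to the identity is my choice for illustration, not a convention from the text.

```python
# Enumerate the allowed fusion trees (15) for the Fibonacci category with
# rho = tau. Heights: 0 = identity, 1 = tau; adjacency h_{j+1} in tau x h_j
# means N^{h_{j+1}}_{tau h_j} > 0. Fixing h_0 = h_L = 0 is an illustrative choice.
allowed = {0: [1], 1: [0, 1]}    # tau x 0 = tau, tau x tau = 0 + tau

def fusion_trees(L, start=0, end=0):
    """Return all height sequences h_0, ..., h_L obeying the adjacency rules."""
    paths = [[start]]
    for _ in range(L):
        paths = [p + [h] for p in paths for h in allowed[p[-1]]]
    return [p for p in paths if p[-1] == end]

for L in range(2, 9):
    print(L, len(fusion_trees(L)))  # dim V grows as the Fibonacci numbers
```

The dimension of $V$ growing as the Fibonacci numbers (rather than as $2^L$) reflects the truncation of the fusion rules, a hallmark of these categories.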
A set of operators $P^{(\chi)}_j$ acting on V is then defined by gluing the two bottom strands in (16) to the two strands surrounding $h_j$. Using F moves to simplify the resulting picture into a sum over trees of the form (15) then gives (17), using (5) and (2). The matrix elements of each $P^{(\chi)}_j$ in the basis (15) are thus given by (18). One remarkable feature of the category setup is that the conserved currents in the height models can be defined and analysed without ever needing the explicit expressions (18). Relations among operators are derived by gluing them together and manipulating using the relations between fusion diagrams. For example, one can prove in this way that the $P^{(\chi)}_j$ are indeed projectors. Setting a = b = ρ in (8) shows they sum to the identity operator, (19). In a particular category for particular choices of ρ, relations between projectors at different j can then be derived, obtaining algebras such as those of Temperley-Lieb [1] or Birman-Wenzl-Murakami [6,7].

Comparing (16) to (9) shows that the cracking-open process used to define the geometric models amounts to a sum over projection operators. Related height models are then defined via (20), for the same amplitudes $A_\chi$ (note the explicit quantum dimensions in (9) normalize the amplitudes to multiply a projector). If ρ is not simple, then the appropriate edge labels $\nu_j$ must be included in the amplitudes. Acting with this R operator can be thought of as adding a vertex to the square lattice. For periodic boundary conditions, the basis elements of V are labeled by all the allowed height configurations with $h_0 = h_L$. The transfer matrix acting on this V is then (21). This T acts at a 45-degree angle to the square lattice, i.e. across the diagonals of the squares. With appropriate choices of boundary conditions, the partition functions of the geometric models and the height models can then be related, as follows from the shadow-world construction [36,37,38,39]. For example, the "restricted solid-on-solid" models of Andrews, Baxter and Forrester [4] are related to the completely packed loop models in this fashion. The quantum spin chains found by taking the Hamiltonian limit of the transfer matrix have been studied in the guise of "anyon chains" [40]. Other connections of local models to associated categories are described in detail in [14].

The Yang-Baxter equation

The purpose of this paper is to find linear equations for Boltzmann weights that "Baxterise" [19] the braided tensor category, i.e. give solutions of the much more complicated Yang-Baxter equation. The latter is never needed for the analysis, but I give it here both for the sake of completeness and to provide some intuition into how to parametrise its solutions. The Yang-Baxter equation (YBE) is trilinear in the Boltzmann weights. A one-parameter family of commuting transfer matrices can be constructed using its solutions. This parameter is typically called the spectral parameter, as the eigenvalues of the transfer matrix depend on it even though the eigenvectors do not. Many of the most profound results of integrability come from analysing how physical quantities depend on the spectral parameter [15]. Mathematical ones do too: as reviewed in section 3.1, braid-group generators often can be found by taking an extreme limit. In a geometric model, only the local part of the Boltzmann weights depends on the spectral parameter u, and the evaluation of any individual fusion diagram is not affected by its presence. I thus sometimes write the amplitudes as $A_\chi(u)$, but these amplitudes may very well depend on other parameters.
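Before unpacking the YBE diagrammatically, a direct numerical check may be helpful. The sketch below is my own, with conventions assumed: it uses the standard XXZ-type Temperley-Lieb representation of the projectors and the classic trigonometric normalization, which may differ by overall factors from the solution derived below.

```python
# Temperley-Lieb representation of the projectors P^(0), P^(1), and a
# numerical check of the Yang-Baxter equation for trigonometric weights.
import numpy as np

gamma = 0.7                          # q = exp(i*gamma), so d_rho = q + 1/q = 2 cos(gamma)
d = 2 * np.cos(gamma)
q = np.exp(1j * gamma)

# rank-one TL generator e = |v><v| on C^2 (x) C^2, with v.v = d
v = np.array([0, np.sqrt(q), -1 / np.sqrt(q), 0])
e = np.outer(v, v)

def embed(m, j, n):
    """Embed the 4x4 two-site operator m at sites (j, j+1) of an n-site chain."""
    return np.kron(np.kron(np.eye(2**j), m), np.eye(2**(n - j - 2)))

n = 3
Id = np.eye(2**n)
e1, e2 = embed(e, 0, n), embed(e, 1, n)

P0, P1 = e1 / d, Id - e1 / d                     # the two projectors on strands (1,2)
assert np.allclose(P0 @ P0, P0) and np.allclose(P1 @ P1, P1)
assert np.allclose(P0 + P1, Id)                  # they sum to the identity, cf. (19)
assert np.allclose(e1 @ e2 @ e1, e1)             # Temperley-Lieb relation

# R_j(u) = A_1(u) P^(1) + A_0(u) P^(0) with A_1 = sin(gamma - u) and
# A_0 = sin(gamma - u) + d sin(u), i.e. R_j(u) = sin(gamma - u) I + sin(u) e_j
def R(j, u):
    return np.sin(gamma - u) * Id + np.sin(u) * embed(e, j, n)

u, w = 0.31, 0.17
lhs = R(0, u) @ R(1, u + w) @ R(0, w)            # three-strand relation, cf. (23)
rhs = R(1, w) @ R(0, u + w) @ R(1, u)
print("YBE holds:", np.allclose(lhs, rhs))       # True
```

Replacing sin(u) by a generic function of u makes the final check fail, illustrating how special the trigonometric solution is.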
The three Boltzmann weights in each term of the YBE have distinct spectral parameters, but the YBE requires a relation between them, and so is a two-parameter equation. Labelling each Boltzmann weight in (9) by the corresponding spectral parameter gives the pictorial version of the YBE, (22). The YBE applies to the height models when the Boltzmann weights are treated as a two-strand operator defined in (20), giving a three-strand relation (23). The relation between the parameters given in (22, 23) is of "difference form", as all the solutions discussed in this paper have this property. This form can be generalised and solutions found [41], but I defer the discussion of the related conserved currents to the future. One nice feature of the difference form is that each spectral parameter in each Boltzmann weight can be interpreted as the (bottom) angle between those two lines. The relation between the three spectral parameters in (22, 23) is required for the picture to lie in the plane.

Solving the YBE even in simple cases requires work. One plugs in the expansion in (9), and then can deform, use F moves and bubble removal to relate different fusion diagrams for the three strands. The equality in (22) must then hold for each of a linearly independent set of fusion diagrams, (over)constraining the amplitudes. Since the corresponding operators $R_j$ from (20) in the height models are written in terms of the same projectors, the same amplitudes give a solution to (23) as well.

The completely packed loop model provides a nice illustration of the techniques. It is easiest to use the loop basis for the Boltzmann weights (13), as the set of linearly independent fusion diagrams is more apparent. These diagrams are shown in (24). After plugging in (13) and simplifying, the YBE (22) gives a relation for the coefficients of each of the fusion diagrams in (24). Namely, letting $C = (A_0 - A_1)/d_\rho$, the first diagram yields one relation, while the second and third each give another; these three are all automatically satisfied. The relation for the fourth picture is (25), with the same equality for the fifth. The factor $d_\rho$ arises by removing a closed loop. Demanding the Boltzmann weights in the completely packed loop model satisfy the YBE thus yields a nasty-looking non-linear functional equation (25) for the amplitude ratio. The solution, however, is remarkably simple. Defining the parameter q via $d_\rho = q + q^{-1}$ gives the solution (26).

Conserved currents

Defining the lattice models in terms of category data as in section 2 leads to a variety of useful applications. In particular, topological defects can be constructed in any lattice model built from a fusion category [14]. Deforming the path of a topological defect leaves the corresponding partition function invariant. Even more remarkably, such defects are defined so that they branch and fuse in a topologically invariant fashion, leaving the partition function invariant under the F moves of the category. Exploiting these properties allows certain universal quantities to be computed directly and exactly on the lattice [42,14].

The current J(z) is defined by terminating a topological defect in a non-topological fashion at a location z. Correlation functions of such operators then will depend on z. The current is non-local because of the topological defect emanating from it, but in a very gentle way, as the path can be deformed without changing the correlator.
A conserved current here satisfies a lattice version of a divergence-free condition, originally introduced in models with quantum-group symmetries [29]. This condition was reintroduced and rebranded as "discrete holomorphicity" [30], but as I will explain, calling it a conserved-current relation is more appropriate. In this section I define the conserved currents in terms of a braided tensor category. This type of category contains more data than the fusion category used to define the lattice models studied here and the topological defects studied in [42,14]. The construction here is simpler, but less general. The payoff is that not only does a braided tensor category give a natural definition for the currents, but using its rules gives a simple method for finding Boltzmann weights where they are conserved.

Braiding and categories

While the lattice models themselves can be defined using a fusion category, the simplest way to construct conserved currents is to include some additional structure, braiding. One nice way to think about braiding is to imagine a knot or link in three dimensions, and then projecting it onto the plane. The projection results in overcrossings and undercrossings, drawn respectively as in (27). The two are oriented with respect to the square lattice on which the lattice models are defined. The braiding must satisfy various consistency relations, known as Reidemeister moves, to ensure that the resulting topological invariant is independent of projection. Two of them ensure that the crossings in (27) are generators of the braid group [43]. The group generators B obey the braid relation (28), where for simplicity all lines are labeled by the same object ρ and the superscripts omitted. The resemblance of (28) to (22) is obvious, and often it is referred to as the Yang-Baxter equation in the category literature. However, this name is fairly misleading. Not only does analysis of the braid group long predate both Yang and Baxter, but it does not include the all-important dependence on the spectral parameters apparent in (22). The resemblance does make it fairly obvious how to obtain representations of the braid group from solutions to the Yang-Baxter equation, as all the arguments in the latter are the same for $u = u' = 0$, and for $|u| \to \infty$, $|u'| \to \infty$ if this limit exists. In the examples studied in this paper, I adopt conventions that give (29). For example, for the completely packed loop model the braid generators are given by (30), with a choice of overall phase.

Neither the braid group nor a fusion category alone is sufficient to compute a knot invariant: the two must be combined into a braided tensor category. Useful reviews for physicists can be found in [11,12,13]. The additional data needed is in the relation (31). The twist factors $\Omega^{bc}_a$ are roots of unity in a braided tensor category. They can be written in the form
$$\Omega^{bc}_a = \nu^{bc}_a\, e^{i\pi(\Delta_b + \Delta_c - \Delta_a)} \qquad (32)$$
where the rational number $\Delta_a$ is called the topological spin of simple object a. The other coefficient $\nu^{bc}_a = \pm 1$ is an annoying sign, with the special case $\nu^{bb}_0$ known as the Frobenius-Schur indicator. No more data than that in (31) need be added to the fusion category data, as by using (8) the braid can be written as the expansion (33). The undercrossing is given by the same relation with all $\Omega_r \to \Omega_r^{-1}$. The expression (33) of the braid as a sum over fusion channels is called a skein relation. When the braided tensor category is built from a quantum-group algebra, the coefficients are related to those of the universal R matrix [21,36,22,23].
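As an elementary illustration (mine, not the paper's; the Kauffman-bracket conventions below are assumed, and may differ from the phases in (30)), a two-channel skein expansion of the type (33) can be realized with the Temperley-Lieb generator, and the braid relation (28) checked numerically:

```python
# Kauffman-bracket form of the crossings: B = A*I + (1/A)*e, with
# d_rho = -A^2 - A^(-2). The script checks Reidemeister II and the
# braid relation on three strands.
import numpy as np

A = np.exp(0.3j)                         # a generic phase
d = -A**2 - A**(-2)

# rank-one TL generator with e^2 = d*e and e1 e2 e1 = e1
v = np.array([0, 1j * A, -1j / A, 0])    # v.v = -A^2 - A^(-2) = d
e = np.outer(v, v)

def embed(m, j, n):
    return np.kron(np.kron(np.eye(2**j), m), np.eye(2**(n - j - 2)))

n = 3
Id = np.eye(2**n)
B  = lambda j: A * Id + (1 / A) * embed(e, j, n)           # overcrossing
Bi = lambda j: (1 / A) * Id + A * embed(e, j, n)           # undercrossing

assert np.allclose(B(0) @ Bi(0), Id)                       # Reidemeister II
assert np.allclose(B(0) @ B(1) @ B(0), B(1) @ B(0) @ B(1)) # braid relation (28)
print("braid group relations verified")
```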
The matrix elements of braid generators acting on a fusion tree can then be computed explicitly by combining (33) with a calculation virtually identical to that leading to (18). Combining braiding and fusing allows one to compute topological invariants for knots and links. Lines out of the plane of a fusion diagram can be deformed using the "hexagon equation" (34). A useful identity for twist factors comes from setting r = c and gluing together the corresponding lines in (34), and then undoing the resulting twists, giving (35). The latter relation comes from using (32).

Using (33) allows a knot or link to be reduced to a fusion diagram. The evaluation using the category gives a topological invariant, up to one subtlety. The first Reidemeister move is pictured in (36). Setting c = b = ρ and a = 0 in (31) yields this relation up to the phase, which therefore must be cancelled out in a topological invariant like the Jones polynomial. The easiest method is to "frame" the knot by treating it as a ribbon, so that (36) corresponds to a 2π twist of the ribbon (proof: try it with a belt). The (signed) number of these twists is called the writhe w, and each fusion diagram must be multiplied by the corresponding $(\Omega^{\rho\rho}_0)^{-w}$ to obtain a topological invariant. While keeping track of the writhe is a pain in the calculation of a knot invariant, the phases in (31) are an essential part of constructing a conserved current.

The skein relations for completely packed loops (30) are found from (33) by supplying the appropriate twist factors. One can use the category $su(2)_k$ arising from the quantum-group algebra $U_q(sl_2)$, which in turn is constructed from a deformation of the Lie algebra $sl_2$. The objects are labeled by their "spin" $0, \tfrac{1}{2}, 1, \ldots, \tfrac{k}{2}$, and the fusion algebra is a truncated version of that of the corresponding representations of sl(2). It can be found in section 2.1 of [14]. For any k > 1, setting ρ = 1/2 yields the fusion $\rho \otimes \rho = 0 \oplus 1$ needed to construct loop models. The quantum dimensions are $d_j = \sin\big((2j+1)\pi/(k+2)\big)\big/\sin\big(\pi/(k+2)\big)$, so in particular $d_1 = d_{1/2}^2 - 1$. Here the topological spins are $\Delta_j = j(j+1)/(k+2)$ (38) (see the discussion in section 5). Using (33) and (12) indeed yields (30). Another braided category $A_{k+1}$ with the same objects and fusion rules as $su(2)_k$ arises from the $\Phi_{1,s}$ fields in the minimal models of conformal field theory [45,46]. The corresponding twist factors come from the minimal-model dimensions (39), giving (30) with $q \leftrightarrow q^{-1}$ (or equivalently exchanging B and $\overline{B}$).

Defining the currents

I now turn to the central topic of this paper, defining and finding conserved non-local currents. The canonical example of such a current is the fermion operator in the critical Ising quantum spin chain. It is non-local in the sense that when acting on the spin Hilbert space, the fermion operator at site j flips all the spins at sites $j' < j$. This flipping is a very special sort of non-locality in that it commutes with all the Hamiltonian generators except those acting non-trivially at site j. An elementary computation shows that such a fermion operator obeys a conservation law.

Such currents are very naturally and generally defined using braided tensor categories, both in geometric and height models. The partition function is written as an expansion over fusion diagrams as described in (10), and expectation values of the current operators are computed by modifying the weights in each term of the sum. One very nice feature of this setup is that this modification is done only to the fusion diagram, and hence only affects the topological part of each weight.
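The Ising statement above is easy to verify directly. The following sketch (my own; Hamiltonian conventions assumed) builds the critical transverse-field Ising chain, the Jordan-Wigner "flipping string" fermion, and checks that its commutator with H stays linear in fermion operators — the conservation law in its simplest guise:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def chain(ops):
    return reduce(np.kron, ops)

def site(op, j, n):
    return chain([op if i == j else I2 for i in range(n)])

n = 6                                  # critical transverse-field Ising, g = 1
H = -sum(site(sz, j, n) @ site(sz, j + 1, n) for j in range(n - 1)) \
    - sum(site(sx, j, n) for j in range(n))

def majorana(kind, j):                 # string of sigma^x's, then sigma^z or sigma^y
    return chain([sx] * j + [sz if kind == "a" else sy] + [I2] * (n - j - 1))

basis = [majorana(kind, j) for j in range(n) for kind in "ab"]
a3 = majorana("a", 3)
comm = 1j * (H @ a3 - a3 @ H)

# expand in the (orthogonal) Majorana basis; zero residual = linear in fermions
coeffs = [np.trace(m.conj().T @ comm) / 2**n for m in basis]
recon = sum(c * m for c, m in zip(coeffs, basis))
print("d a_3/dt is linear in fermions:", np.allclose(recon, comm))
```

Only Majorana operators at neighbouring sites appear in the expansion, a discrete divergence-free statement closely analogous to (44) below.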
The current-current expectation value $\langle J(w)\, J(z)\rangle$ is defined by modifying each fusion diagram to include another strand with label $\phi \neq 0$ and terminating it at the two edges w and z of the square lattice. In the completely packed model, the simple object $\phi \neq 0$ must obey $\rho \in \phi \otimes \rho$ for the termination to be possible. The ensuing trivalent vertex for J(z) is defined so that the φ-strand is pointing upward or leftward, with J down or to the right. The φ-strand is drawn dashed solely as a visual aid; it is treated as any other strand. Any time the φ-strand meets a ρ line, the ensuing intersection is defined to be an overcrossing. The resulting diagram $F_{w,z}$ involves both fusion and braiding. Then for any completely packed geometric model the correlator is (40); in a picture, (41), using as always (9) to obtain the sum over trivalent vertices. The relation (3) requires that this correlator is non-vanishing only for F that connect z and w. When ρ is not simple, one may consider more complicated currents $J^{(\phi)}_{ab}(z)$ that change the label on an edge.

Because of (34), the path of the φ-strand away from the vertices can be deformed without changing the evaluation. The currents therefore are very gently non-local, as the φ-strand away from the vertices is a lattice topological defect. As discussed in depth in [42,14], topological defects can be found in any (not necessarily integrable or critical) two-dimensional classical lattice model built using a fusion category. These defects can branch, and this structure can be utilised to define multipoint correlators of the currents. Although such expectation values remain independent of local deformations of the paths, they will depend on how the various φ-strands pass over each other or fuse together. Understanding such multipoint correlators likely would be interesting, but I will discuss them no further here.

The currents are defined in height models simply by gluing the appropriate vertices to the corresponding ρ-legs of the fusion tree, with the φ-strand braiding appropriately. The action on the tree can be worked out using F moves and bubble removal as done for the projectors in (17).

The conserved-current relation

A braided tensor category provides a natural way not only to define currents, but to find ones that are conserved. Current conservation in essence amounts to a vanishing lattice divergence at every vertex where the ρ lines meet. In a picture, this is (43); in an equation, (44), where z is written in Cartesian coordinates (x, y) and the lattice spacing is 2 so that edges have x + y even. This operator equation means that the corresponding sum over two-point functions vanishes for any fixed w. The coefficient µ is a complex number, and can be thought of as a rescaling of the coordinates in the interpretation of (44) as a vanishing divergence. The corresponding charge is simply $Q = \sum_{n=1}^{L} J(x + 2n, y)$, and is conserved (i.e. independent of y) up to boundary terms.

The conservation law (44) for fractional-spin currents in lattice models appeared long ago, going back at least to Bernard and Felder [29] in 1991. They showed how quantum-group algebras give a method for defining the vertices and finding solutions to the relation. Their work does not seem to have been widely noticed, perhaps because of the work involved in doing explicit calculations using the representation theory of quantum-group algebras. The same relation was reintroduced in the interest of finding "discretely holomorphic" operators [30,47].
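The "lattice Cauchy-Riemann" reading of (44) discussed next can be previewed with a toy computation (mine; purely illustrative and independent of any category data): discrete contour sums of low-order polynomials around one plaquette vanish, while higher powers do not — a reminder that one vertex's worth of conditions falls well short of full holomorphicity.

```python
# Discrete contour sum of f(z) around a single square face, with f sampled
# at edge midpoints: sum_k f(m_k) * (z_{k+1} - z_k).
import numpy as np

corners = np.array([0, 2, 2 + 2j, 2j, 0])      # one face, lattice spacing 2
mids = (corners[:-1] + corners[1:]) / 2        # edge midpoints
dz = np.diff(corners)

for p in range(4):
    print(p, abs(np.sum(mids**p * dz)))        # vanishes for p <= 2, not for p = 3
```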
An operator of spin s in conformal field theory picks up a phase $e^{is\theta}$ under rotation by some angle θ, just as the operator J picks up $e^{i\pi\Delta_\phi}$ under twists by π. If the lattice model has a continuum limit, the spin of the CFT operator corresponding to J should be $\pm\Delta_\phi$ mod 1, an observation confirmed numerically in many examples. The idea of [30] goes further, making the observation that if (44) is written as $J_1 + \mu J_2 - J_3 - \mu J_4 = 0$ and µ is chosen appropriately, it amounts to a lattice analog of the vanishing of a contour integral, the lattice Cauchy-Riemann equation around one vertex. One then might hope that such currents become holomorphic in the continuum limit, enabling a rigorous demonstration of conformal invariance emerging from a lattice model [30,47,48]. This approach works spectacularly for a few cases like Ising where the continuum field theory is free [49], but otherwise has not borne fruit. The reason presumably is that the conserved-current condition (44) alone is insufficient to prove any form of holomorphicity, because it provides only half the constraints needed: with twice as many edges as vertices on the square lattice, there are twice as many degrees of freedom as constraints. Indeed a counterexample occurs in the quantum three-state Potts chain. The lattice parafermion operator defined in [50] satisfies (44) [51] (as redone below in section 5.3). However, in the scaling limit the lattice operator becomes a linear combination of two operators with the same s mod 1, only one of which is holomorphic [52].

Nevertheless, one learns a great deal from (44) by taking a different point of view. A braided tensor category gives a natural definition of a current and the topological part of the Boltzmann weights, but it does not fix the amplitudes $A_\chi(u)$ determining the local part. As only one Boltzmann weight appears in each term in (44), demanding current conservation gives a linear equation for these amplitudes. Solving it is a fairly straightforward exercise in loop models (completely packed and otherwise) and a few simple local models. Cardy and others indeed found many more solutions by direct calculation [51,53,31,54,55,56,57,58,59,60,61,49]. More importantly, Cardy made the very interesting observation that in all cases, the Boltzmann weights also satisfy the Yang-Baxter equation (22) [31]. As the latter is trilinear, much more brute force is needed to solve it, and so utilising (44) is much more efficient, not to mention elegant.

Boltzmann weights from conserved currents

I show here how to use a braided tensor category to find a wide class of solutions to the conserved-current relation (44). The Boltzmann weights are given in terms of the category data and µ, the latter turning into the spectral parameter. The method works for local and non-local models alike. In the completely packed case, the answer is rather simple, and in many special cases reduces to one obtained using quantum-group algebras. In these cases, as well as all others I have checked, the Boltzmann weights give a trigonometric solution of the Yang-Baxter equation. Solving the conserved-current relation therefore seems to give a linear method of Baxterisation.

Except for µ, the quantities in the conserved-current relation are all defined via a braided tensor category. Manipulating the diagrams using F moves and twisting does not change the evaluations, and so leaves the correlators invariant.
I explain here how such elementary manipulations can be used to put all four diagrams in (44) in a common form, making finding solutions easy. The common form is found by writing each vertex as a sum over fusion channels using (9), and then using F moves and twisting to move the φ-strand into the centre to fuse with χ. A diagram requiring only an F move is (45), where as always the dashed line is labelled φ and unlabelled solid lines ρ. Another simple case is (46). The other two require several twists, yielding (47) and, via a similar sequence of moves, (48). These four relations put all the terms in (44) into a common form, with two lines fusing in the middle to give φ. Summing over fusion channels from (9), as well as those coming from the F move, then gives one relation for each pair a, b obeying $N^b_{a\phi} = N^a_{b\phi} \neq 0$. Happily, the identity (6) means the F symbols and quantum dimensions cancel, leaving the relation (50), valid when as always $a, b \in \rho \otimes \rho$ and $\rho \in \phi \otimes \rho$. When all objects are self-dual, the latter condition implies $\phi \in \rho \otimes \rho$ as well.

The elegant relation (50) arose long ago in models defined using quantum-group algebras and with φ a particular representation (the adjoint in the untwisted case) [26,27,28]. Here it gives solutions of Jimbo's equation [25] for Baxterising a representation of a quantum-group algebra. The relations (44, 50) use only the data coming from the braided tensor category, so the approach here provides not only a generalization, but a shortcut to the result. The advantage of the quantum-group approach, however, is that weights satisfying Jimbo's equation satisfy the Yang-Baxter equation, whereas I have only proved current conservation (44). Nevertheless, it is natural to expect that (50) implies (22) well outside quantum-group algebras, as requiring current conservation already highly constrains the theory. Indeed, an analogous argument in Lorentz-invariant field theory states that having even one additional conservation law not commuting with the Poincaré algebra is sufficient to make the model integrable [32,33].

Boltzmann weights with conserved currents

The constraint (50) is the central result of this paper. Boltzmann weights satisfying it admit a conserved current defined via (41, 44) in the completely packed model (ρ simple). The simplicity of (50) is rather striking. It depends only on the twist factors and the fusion algebra, with the current type φ only entering the latter. No conditions are placed on µ. Here I describe how to find categories and objects for which there is a solution. I find many examples where it works and explain how to see when it does not. Although most of the weights found here are known to satisfy the Yang-Baxter equation, one series seems not to have arisen previously. Moreover, only a few of the conserved currents derived in this section have been analysed before.

A few general considerations

Explicit formulas for category data. The twist-factor ratio needed can be rewritten as (51), where the first equality comes from using (35) and the second from (32). An explicit expression for $\Omega^{\rho\rho}_b$ in any modular tensor category can be found in proposition 2.3 in [62]. A modular tensor category extends the braided tensor category to allow for fusion diagrams on surfaces, including data for modular transformations. The explicit expression involves these data, the modular S matrix. Simpler expressions exist for any category $g_k$ built from a quantum-group algebra $U_q(g)$ [24].
The topological spin is proportional to the quadratic Casimir $C_g$ of the corresponding representation of the (undeformed) Lie algebra g: $\Delta_b = C_g(b)/(k + h_g)$ (52). The level k is a positive integer, while $h_g$ is the dual Coxeter number of g (the quadratic Casimir of the adjoint representation). The sign $\nu^{bc}_a = \pm 1$ is determined by the symmetry (+1) or antisymmetry (−1) of the invariant tensor coupling the representations b and c into a. It is worth noting that much useful information about simple Lie algebras such as tensor products and quadratic Casimirs may be accessed using the Mathematica package LieART [63].

Tensor-product graphs

A solution of (50) does not automatically exist for a given φ with $\rho \in \phi \otimes \rho$. For r objects in $\rho \otimes \rho$ and $0 \in \rho \otimes \rho$, there are up to r(r − 1)/2 possible distinct equations in (50) but only r − 1 independent amplitude ratios. Of course, some $N^b_{a\phi}$ may be zero, information that can be summarised conveniently in a tensor-product graph [26]. The vertices of this graph are labeled by the objects in $\rho \otimes \rho$, and two vertices a, b share an edge if $N^b_{a\phi} \neq 0$, so that each edge corresponds to one relation. For example, when $\rho \otimes \rho = 0 \oplus 1$ as for completely packed loops, the only possible non-trivial label for the current is φ = 1, and the tensor-product graph is simply the single edge joining 0 and 1. When all the objects are self-dual as assumed above, $N^b_{a\phi} = N^a_{b\phi}$, so the edges of the graph do not need to be oriented. I give an example below in section 5.3 where this assumption is relaxed.

Tree tensor-product graphs

Euler's relation ensures that the number of edges of a tree (a graph with no cycles) is one less than the number of vertices. Thus when the tensor-product graph is a tree, the number of constraints coming from (50) matches the number of independent amplitude ratios, and a solution exists.

Loops and ABF. The data for the two braided tensor categories with the fusion rule $\rho \otimes \rho = 0 \oplus 1$ are given in (38) and (39). Using (32) to get the twist factors immediately gives the amplitudes (53). These amplitudes result in conserved currents in the completely packed loop models, recovering the results of [51]. The shadow-world construction described in section 2.3 extends the result to the corresponding height models of Andrews, Baxter and Forrester [4]. Even more exciting is the fact that the weights satisfy the Yang-Baxter equation, as apparent from comparison with (26) with $\mu = e^u$ for $su(2)_k$ and $\mu = e^{-u}$ for $A_{k+1}$.

One category, two solutions

The next-simplest case is when $\rho \otimes \rho$ is the sum of three objects. The vector representation V of $so(n)_k$ or $sp(2m)_k$ with n > 2, m > 1 and k ≥ 2 has $V \otimes V = 0 \oplus A \oplus S$, with A and S the antisymmetric and symmetric representations respectively. In these categories F moves do not allow trivalent vertices to be removed as in (12) in the loop model, and the corresponding Birman-Wenzl-Murakami algebra [6,7] generalises Temperley-Lieb to include intersections. Either A or S can be used for φ, with tensor-product graphs that are trees. The signs needed to compute the Boltzmann weights are $\nu^{VV}_A = -1$ and $\nu^{VV}_S = 1$, as the names of the objects indicate. For so(n), $\nu^{VV}_0 = 1$ and the quadratic Casimirs are $C_n(A) = h_{so(n)} = n - 2$ and $C_n(S) = n$. The ratios from (50) are then given by (56) for $so(n)_k$, with $q = e^{i\pi/(n+k-2)}$. For sp(2m), $\nu^{VV}_0 = -1$ along with $C_m(A) = m$ and $C_m(S) = h_{sp(2m)} = m + 1$, giving the ratios (57).

A new solution of Yang-Baxter?

A nice aspect of the approach here is that distinct solutions for a given model arise naturally from different choices of φ. In the quantum-group approach, the solutions come from very different places.
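A quick tabulation may help fix conventions (my own sketch; it assumes the normalization $\Delta_b = C_g(b)/(k + h_g)$ for (52) and, for the so(n) vector, the standard Casimir $C_n(V) = (n-1)/2$, which is not quoted in the text):

```python
# Topological spins entering the twist-factor ratios, from quadratic Casimirs.
from fractions import Fraction

def spin_su2(two_j, k):                  # C(j) = j(j+1), dual Coxeter number h = 2
    j = Fraction(two_j, 2)
    return j * (j + 1) / (k + 2)

def spin_so_n(rep, n, k):                # C(A) = n-2 and C(S) = n as quoted above
    C = {"V": Fraction(n - 1, 2), "A": Fraction(n - 2), "S": Fraction(n)}[rep]
    return C / (k + n - 2)               # h_{so(n)} = n - 2

print([spin_su2(two_j, 4) for two_j in range(5)])        # su(2)_4: Delta_0 ... Delta_2
print({r: spin_so_n(r, 5, 4) for r in ("V", "A", "S")})  # so(5)_4 vector data
```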
The weights in (56, 57) where φ is the adjoint representation (A for so(n) and S for sp(2m)) correspond to long-known solutions of the Yang-Baxter equation, found by rewriting the height-model weights of [8] in terms of the projectors $P^{(0)}$, $P^{(A)}$ and $P^{(S)}$ defined in (18). These are the solutions of Jimbo's equation corresponding to untwisted Kac-Moody algebras [26,27]. The weights for $sp(2m)_k$ with φ = A and $so(2l+1)_k$ with φ = S correspond to the solutions of Jimbo's equation for the twisted Kac-Moody algebras $A^{(2)}_{2m-1}$ and $A^{(2)}_{2l}$ respectively [64,28]. (The case l = 1 coming from $so(3)_k$ is known as the Izergin-Korepin R-matrix [65].) The solution for $so(2l)_k$ with φ = S from (56) seems to have appeared only implicitly in e.g. [66]. Given how naturally it fits in with the others, I conjecture it also satisfies the Yang-Baxter equation.

Higher spins

A solution for φ = 1 in the $su(2)_k$ or $A_{k+1}$ categories exists for any ρ, generalizing the ρ = 1/2 and ρ = 1 results. The tensor-product graph (58) is a line, and hence a tree. The Boltzmann weights admitting a conserved current are found easily using (50) with the category data from (38) or (39), giving (59) for $a = 0, 1, \ldots, \min(2s, k - 2s)$, with $q \to q^{-1}$ for $A_{k+1}$. These weights solve Jimbo's equation and hence Yang-Baxter, and were found long ago [67] using the fusion procedure [68]. Using a higher value of φ however leads to a more complicated tensor-product graph without a solution, as I discuss below.

Another two-solution case

Taking ρ to be a spinor representation in $so(n)_k$ leads to another tensor-product graph where all the objects lie in a line. Taking φ = A, the adjoint representation, gives solutions of Yang-Baxter as expected [27]. An interesting feature for odd n (where there is only one spinor representation) is that taking φ = V also leads to a tensor-product graph where all objects lie in a line. As explained in [28], this second solution corresponds to the twisted algebra $D^{(2)}_{(n+1)/2}$.

Exceptional cases

Conserved currents can also be constructed for categories built on exceptional Lie algebras. One nice example comes from the $(G_2)_k$ category by taking ρ to be the 7-dimensional vector representation (treating $G_2$ as a subalgebra of so(7) [69]). Its fusion is $V \otimes V = 0 \oplus V \oplus A \oplus S$. Taking φ in the adjoint representation gives the corresponding tensor-product graph. The quadratic Casimirs are $C_{G_2}(V) = 2$, $C_{G_2}(A) = 4$, $C_{G_2}(S) = 14/3$, while the signs are as in so(7), namely $\nu^{VV}_0 = 1$, $\nu^{VV}_A = -1$ and $\nu^{VV}_S = 1$. Then (50) gives amplitude ratios in agreement with [64]. A similar calculation gives solutions for the fundamental representations of $E_7$ and $F_4$ [70].

Tensor-product graphs with cycles

When a tensor-product graph contains a cycle, the relations (50) overconstrain the Boltzmann weights. Generically, there is no such solution for a given φ. Moreover, for some ρ there exists no φ yielding a conserved current, and so presumably no solution of the Yang-Baxter equation. However, sometimes the extra constraints can be satisfied, and I discuss a few examples here.

Escaping quantum groups with parafermions

All the lattice models analysed in section 5.2 can be built from a quantum-group algebra. Here I discuss examples that cannot: the integrable $Z_M$-invariant clock models [71]. These models are built from the $Z_M$ Tambara-Yamagami category [72]. It has M + 1 objects, labeled X and a = 0, 1, …, M − 1, with fusion algebra $a \otimes a' = (a + a') \bmod M$, $X \otimes a = a \otimes X = X$, and $X \otimes X = \oplus_a\, a$. Taking ρ = X then gives a height model where half the heights are X, with the other half taking any value 0, 1, …, M − 1.
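Since the Tambara-Yamagami rules may look unfamiliar, here is a small self-check (my own sketch; the labels are a convention) that the fusion multiplicities just listed are associative, a prerequisite for the pentagon identity:

```python
# Z_M Tambara-Yamagami fusion: objects 0..M-1 (invertible) and X = M.
# N[a, b, c] counts c in a (x) b; associativity is
# sum_e N[a,b,e] N[e,c,d] = sum_f N[b,c,f] N[a,f,d].
import numpy as np

M = 6
X = M
N = np.zeros((M + 1, M + 1, M + 1), dtype=int)
for a in range(M):
    for b in range(M):
        N[a, b, (a + b) % M] = 1         # a (x) b = (a + b) mod M
    N[a, X, X] = N[X, a, X] = 1          # X (x) a = a (x) X = X
    N[X, X, a] = 1                       # X (x) X = sum over all a

lhs = np.einsum('abe,ecd->abcd', N, N)
rhs = np.einsum('bcf,afd->abcd', N, N)
print("fusion rules associative:", np.array_equal(lhs, rhs))
```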
The Boltzmann weights thus can be written in terms of the projectors $P^{(a)}$, which are given in category language in [14]. The topological spins are $h_a = a(M - a)/M$, the dimensions of the parafermion fields in the corresponding conformal field theory [73], while all ν = 1. The current J defined by taking φ = 1 is known as the parafermion operator, found by generalising the Jordan-Wigner transformation [50]. The fusion coefficients needed to define the tensor-product graph are $N^{a+1}_{a1} = 1$ (with indices interpreted mod M) and zero otherwise. Because the objects a = 1, …, M − 1 are not self-dual ($\bar{a} = M - a$), $N^b_{a\phi} \neq N^a_{b\phi}$ and the edges of the tensor-product graph need orientation. Putting an arrow pointing from a → a + 1 results in an M-sided oriented polygon, e.g. a hexagon for M = 6. For J to be conserved [53], (50) relates the amplitudes as in (63), where $A_M \equiv A_0$. The fact that the tensor-product graph is a cycle means that there is one more equation in (50) than there are amplitude ratios, and so one consistency condition. It amounts to checking that (63) for both a = 0 and a = M − 1 indeed gives the same ratio $A_1/A_0$. The ratios (63) are precisely those found in [71] to satisfy the Yang-Baxter equation, with the identification $\mu = e^{i\alpha/M}\omega^{-1}$. Thus again demanding a conserved current results in an integrable model.

Other solutions with cycles

Despite the results for parafermions, a φ whose tensor-product graph has a cycle generally does not yield a conserved current. For example, consider $su(2)_k$ or $A_{k+1}$ with ρ = 3/2, so that $\rho \otimes \rho = 0 \oplus 1 \oplus 2 \oplus 3$ when k ≥ 6. The tensor-product graph for φ = 1 given in (58) is a tree, leading to a conserved current. However, taking φ = 2 gives a graph with a cycle. Using the appropriate data shows the only consistent solution to (50) requires $1 = 6 \bmod (k+2)$, which cannot be satisfied with k ≥ 6. Changing ρ typically makes matters worse, as the more objects in $\rho \otimes \rho$, the more difficult it is to avoid cycles in the tensor-product graph and the resulting constraints.

Nevertheless, solutions of (50) still exist in a few cases with cycles. A number of examples where φ is the adjoint representation of the quantum-group algebra are described in [26,27]. Here I briefly describe one example not covered there, and apparently discussed in the literature only in the k → ∞ rational limit [74]. The example is $sp(2m)_k$ taking ρ = S (the adjoint) and φ = A. The ensuing tensor-product graph contains a cycle, with the other representations labelled by their highest weights in the standard conventions [63]. The constraint coming from the cycle is satisfied because $\Omega^{SS}_{S}(\Omega^{SS}_{A})^{-1} = \Omega^{SS}_{2\mu_1+\mu_2}(\Omega^{SS}_{2\mu_2})^{-1}$. I have verified this fact from the explicit data, but it is possible something deeper ensures its truth. The same sort of tensor-product graph and identity applies for $so(n)_k$ with ρ = A and φ = S as well.

Conclusions

A glib but not meaningless way of summarising integrable lattice models is as "integrability requires adding geometry to topology". Boltzmann weights of trigonometric integrable models involve both topological invariants and local weights. The local information depends on the spectral parameter, the angle between two lines of the lattice in (22) for most solutions of the Yang-Baxter equation [75]. Thinking of the conserved-current relation (44) as a lattice analog of a divergence-free condition gives a natural explanation for why angles and hence geometry appear [31].
Requiring that a conserved current exist is a much easier way of finding trigonometric Boltzmann weights than solving the Yang-Baxter equation [31]. Moreover, the categorical approach described in this paper provides a simple method both to define the currents and then find the weights that make them conserved. This simplicity of (50) points the way to being able to classify which objects in a category can be Baxterised. In particular, since all the data is known for categories built on quantum-group algebras, it very well may be possible to classify all such integrable models.

Just as all known Boltzmann weights admitting a non-trivial conserved current go on to solve the Yang-Baxter equation, the converse may also be true. Namely, of all the known unitary trigonometric solutions to Yang-Baxter, I know of none that cannot be written in terms of category data with a conserved current. Of course, knowing no counterexamples is hardly the same as knowing the truth, but it is a promising start. Given the simplicity of the construction, with a little patience it should be possible to check many more examples, and very possibly even prove (or disprove) that all unitary trigonometric solutions of the Yang-Baxter equation are of the form (50).

I made a number of assumptions to simplify the analysis, but none of them seem particularly crucial. Models where ρ is not simple very possibly can be obtained by reduction from simple objects. For example, the "dilute O(n)" model (where ρ = 0 ⊕ 1/2 in $su(2)_k$ or $A_{k+1}$ language) is related to the $A^{(2)}_2$ Izergin-Korepin solution described above [76]. Although for the most part I avoided categories with non-self-dual objects, the fact that the clock-model example worked beautifully is a good omen for extending the results to such models. Placing different objects on the horizontal and vertical strands as in [27,28] and including objects where $N^c_{ab} > 1$ also seem to present no major obstacle.

Even more intriguingly, the results may be applicable directly to fusion categories. In [14], topological defects were constructed in lattice models built on a fusion category seemingly without recourse to braiding. Such defects come from the Drinfeld centre, a braided tensor category associated with any fusion category, even those without braiding [77]. The construction can be extended to allow these defect lines to be terminated without braiding, and most importantly, with the appropriate behaviour under twists [78]. It seems very possible that the construction in this paper can be extended to cover such cases, for example the Haagerup category and others discussed e.g. in [79]. Even more exciting would be if such conserved currents were then to lead to integrable lattice models.

Another very interesting direction to pursue is to understand whether the assumptions made here can be relaxed even further to cover various more complicated integrable models such as those built on graded Lie "superalgebras". Typically the associated categories are not unitary and can have an infinite number of simple objects. Nevertheless the quantum-group approach does work [80], boding well for extending the categorical approach as well.

One glaring hole remains, though. The analysis so far does not provide a way to address elliptic solutions of Yang-Baxter, where the associated lattice models are integrable but not critical. Most and possibly all trigonometric solutions admit at least one elliptic deformation, but the connection to the category is not so obvious.
However, recent progress has been made by extending Chern-Simons topological field theory from three spacetime dimensions to four. Elliptic solutions to Yang-Baxter arise with the extra dimension playing the role of the spectral parameter [81]. It would be quite exciting to relate this field-theory approach to the categorical one, even in the trigonometric case.
Predatory and Defensive Strategies in Cone Snails

Cone snails are carnivorous marine animals that prey on fish (piscivorous), worms (vermivorous), or other mollusks (molluscivorous). They produce a complex venom mostly made of disulfide-rich conotoxins and conopeptides in a compartmentalized venom gland. The pharmacology of cone snail venom has been increasingly investigated over more than half a century. The rising interest in cone snails was initiated by the surprisingly high human lethality rate caused by the defensive stings of some species. Although a vast amount of information has been uncovered on their venom composition, pharmacological targets, and mode of action of conotoxins, the venom-ecology relationships are still poorly understood for many lineages. This is especially important given the relatively recent discovery that some species can use different venoms to achieve rapid prey capture and efficient deterrence of aggressors. Indeed, via an unknown mechanism, only a selected subset of conotoxins is injected depending on the intended purpose. Some of these remarkable venom variations have been characterized, often using a combination of mass spectrometry and transcriptomic methods. In this review, we present the current knowledge on such specific predatory and defensive venoms gathered from sixteen different cone snail species that belong to eight subgenera: Pionoconus, Chelyconus, Gastridium, Cylinder, Conus, Stephanoconus, Rhizoconus, and Vituliconus. Further studies are needed to help close the gap in our understanding of the evolved ecological roles of many cone snail venom peptides.

Introduction

Cone snails are specialized carnivorous marine mollusks that can be found in coral reef areas, from shallow intertidal to deeper waters, and are spread across the tropical Indian, Pacific, and Atlantic Oceans [1]. They are classified as gastropods within the Conidae family, which features hollow radular teeth and venom glands [2]. They use a complex venom mixture to paralyze and hunt fish, mollusks, and worms [3]. This venom is secreted through epithelial cells lining the cone's venom gland, which is a long and thin tubular duct [4]. A single radular tooth, analogous to a hypodermic needle, is then moved into the proboscis, through which the rapid-acting venom is injected. The venom is acknowledged as a rich source of potent pharmacological components, raising high interest in the drug development field [5].
This venom consists primarily of biologically active peptides, generally characterized as conotoxins or conopeptides. They can be classified into two groups: conotoxins, which are cysteine-rich peptides consisting of 10 to 30 amino acids, and conopeptides, which are cysteine-poor, with one or no disulfide bond [4][5][6]. Moreover, conotoxins are highly structured and often show high affinity and selectivity toward membrane receptors, ion channels, and other transmembrane proteins of the nervous and non-nervous systems [4]. Conopeptides include several types of cysteine-poor peptides, such as contulakins, conantokins, conorfamides, conolysins, conophans, conomarphins, contryphans, conopressins, and more recently, hormone-like conopeptides, such as elevenins or prohormones [5,7]. Conopeptides are usually minor in comparison to conotoxins in the venom mixture, and each presents a selective type of target [7]. These small peptides can work as ligands, which induce a physiological reaction by interacting with a given receptor [4]. Conotoxins and conopeptides are secreted as peptide precursors, which can be portioned into three characteristic sections: a highly conserved signal peptide, representative of the gene superfamily from which it was translated, a pro-peptide section, and a highly diversified mature peptide (Figure 1). The mature peptide is the active sequence portion, which is enzymatically cleaved and then modified into a highly stable structure within the injected venom [8].

Figure 1. The signal region (framed in blue) presents a sequence of highly conserved residues, mainly hydrophobic, while the mature region (framed in purple) presents more diversity of sequence and a greater number of cysteine residues. Conotoxin precursors: ω-GVIA (Gastridium geographus), ω-SVIA (Pionoconus striatus), ω-CVID (Pionoconus catus), ω-MVIIA (Pionoconus magus), δ-PVIA (Chelyconus purpurascens), and δ-TxVIA (Cylinder textile). The conotoxin precursors were aligned, amino acid residues were highlighted (in purple) according to conservation, and disulfide bonds are represented with black lines.
The cysteine pattern within the conotoxin sequence is designated with roman numerals, and it directs the tridimensional structure, which in turn also influences their biological activity. So far, although only a few conotoxins have been fully characterized pharmacologically, more than 20 pharmacological targets have been identified. Some of the biological targets involve, for the most part, ion channels, but also some G-protein-coupled receptors and transporters [3]. Conotoxins are classified according to their targets into pharmacological families, defined by Greek letters, such as α, δ, µ, ω, κ, γ, etc. (Figure 2) [3]. For instance, ω-conotoxins are antagonists of voltage-gated calcium channels, and some are effective against neuropathic pain [3]. Such activity was the basis for the development of the first marine-based drug isolated from a cone snail, known as Prialt®. This drug is a synthetic version of the ω-conotoxin MVIIA isolated from the piscivorous species, Pionoconus magus [9]. Likewise, some α-conotoxins have been characterized as nicotinic acetylcholine receptor (nAChR) antagonists, with some of them having potential in the treatment of pain, cognitive, cardiovascular, and other disorders [9]. For the past three decades, research in the field has been mainly focused on finding new ligands for known targets, with a strong emphasis on modulators of pain receptors [9].
Figure 2. Pharmacological families are designated by Greek letters, gene superfamilies by capital letters (A, B, C, D, E, F, G, H, I, J, etc.), and cysteine frameworks by roman numerals (I, II, III, IV, V, VI, etc.). Identified biological targets may be linked to one or several pharmacological families (i.e., voltage-gated Na+ channels are targeted by µ-, δ-, and ι-conotoxins) [3,5,10].

The ~800 species of cone snails can be categorized into three main groups according to their diet. Piscivorous species hunt fish, molluscivorous species prey upon mollusks, and vermivorous species feed upon worms (Figure 3). The type of radula tooth seems to be directly correlated to the diet, and this criterion has been used to support the classification of species [11]. Based on molecular phylogenetic studies, cone snails have been classified into a single large family, Conidae, which can then be divided into four genera: Conus, Conasprella, Profundiconus, and Californiconus [2]. The genus Conus constitutes more than 85% of all cone snail species, which can then be further classified into 57 subgenera or 'clades' of Conus species, which represent a clear subgrouping within the genera [2]. These classifications can provide a better understanding of the "biotic interactions" within Conus species [4]. Unfortunately, rather than being tested on biologically relevant animal models, cone snail venoms have almost exclusively been investigated using mammalian bioassays. As a result, the conclusions drawn from these assays should be interpreted with caution when extrapolated to the biology of cone snails.
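As a purely illustrative aside (the sequences below are invented placeholders, not real conotoxins), the precursor organization and cysteine frameworks described above lend themselves to simple computational parsing:

```python
# Toy conotoxin precursor: signal / pro-peptide / mature regions, with the
# cysteine framework read off the mature peptide by grouping C residues.
precursor = {
    "signal": "MKLTILVLLAALLLA",        # hypothetical hydrophobic signal region
    "pro":    "DQRSEAGRLQEH",           # hypothetical pro-region
    "mature": "CAAAGCAAAGCCAAAGCAAGC",  # hypothetical mature peptide
}

def cysteine_framework(seq):
    """Collapse a sequence to its cysteine pattern, e.g. 'C-C-CC-C-C'."""
    blocks, run = [], 0
    for aa in seq:
        if aa == "C":
            run += 1
        else:
            if run:
                blocks.append("C" * run)
            run = 0
    if run:
        blocks.append("C" * run)
    return "-".join(blocks)

print("cysteines:", precursor["mature"].count("C"))            # 6
print("framework:", cysteine_framework(precursor["mature"]))   # C-C-CC-C-C
```

In real pipelines, an analogous step is applied to translated transcriptome sequences before conotoxins are assigned to gene superfamilies and cysteine frameworks.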
Envenomation Strategies in Cone Snails

Piscivorous cone snails exhibit varying types of hunting behaviors. For instance, upon the detection of a prey, first through chemosensory cues [12], some cone snails extend their proboscis in order to inject a paralytic venom (Figure 4A). The venom is injected via a radula tooth that is comparable to a miniature harpoon that the cone snail uses to sting and tether the prey to avoid its escape [4,13]. Upon the strike, the prey often displays an immediate tetanic paralysis. The cone snail then retracts its proboscis to drag its victim toward its enlarged rostrum to engulf it [13]. The archetype of this behavior is the 'taser-and-tether' strategy employed by the majority of piscivorous species from the Pionoconus, Textilia, and Chelyconus clades, where injection of venom first produces an immediate paralysis ("taser"), followed by the reeling back of the tethered fish into the rostrum via the contraction of the proboscis, which is still tightly grasping the base of the radula tooth [4].

Figure 4. Piscivorous "taser-and-tether" and "net-hunting" strategies. (A) Pionoconus striatus is the prototypical species that uses a "taser-and-tether" strategy. The extended proboscis is reminiscent of a fishing line, and the radula tooth is modified into a mini-harpoon to tether a prey. (B) The net-hunting strategy of Gastridium geographus implies the extension of its rostrum in order to engulf a school of fish, which are already dazed by the hypothetical release of sedative compounds in the water.
In non-piscivorous cone snails, the hunting behaviors have been much less investigated.For most molluscivorous species observed in captivity or in the wild, the predatory Piscivorous "taser-and-tether" and "net-hunting" strategies.(A) Pionoconus striatus is the prototypical species that uses a "taser-and-tether" strategy.The extended proboscis is reminiscent of a fish line and the radula tooth modified into a mini-harpoon to tether a prey.(B) The net-hunting strategy of a Gastridium geographus implies the extension of its rostrum in order to engulf a school of fish, which are already dazed by the hypothetical release of sedative compounds in the water. Remarkably, some other cone snails have been observed to catch their prey without prior sting.In this case, the cone snail is hypothesized to release a set of toxins in the water, which places the prey into a sedative-sleepy state (Figure 4B).The cone snail then opens its rostrum to engulf it and may proceed to envenomate and predigest the prey [13].Thus, cone snails that use this strategy, named as 'net-hunting', would supposedly release venom components in the water and inject paralytic peptides, which induces an irreversible neuromuscular paralysis of the captured prey.Lastly, the "strike-and-stalk" envenomation strategy is a variation of the taser-and-tether strategy, where the cone snail strikes a prey without tethering it and engulfs it after immobilization has occurred.The latter strategy remains less studied in terms of the neurobiological mechanism involved [13]. In non-piscivorous cone snails, the hunting behaviors have been much less investigated.For most molluscivorous species observed in captivity or in the wild, the predatory strategy involves actively chasing the prey and injecting, multiple times, fine, arrow-like radula teeth into the foot of the prey [14].The firing of the radula tooth is usually accompanied by vigorous pumping of copious amount of venom, which can be seen, when injected in excess, as a whitish cloud escaping out the tip of the proboscis and/or out of the base of the tooth from back pressure [15].In the case of the mass spectrometry (MS) analysis of successive stings by Cylinder textile, modest variations in the venom composition were described [16].The first injection usually stops or slows down the prey but does not completely incapacitate it; therefore, it was suggested that a second, third, or more injections, possibly with different peptides, were needed to eventually overcome the prey. 
Hunting behaviors for vermivorous species are even more elusive, except for only a few species. Both Stephanoconus imperialis and Stephanoconus regius prey almost exclusively on amphinomid worms ("fireworms"). These two species use a prey-capture strategy reminiscent of the "taser-and-tether" strategy employed by many piscivorous species. Indeed, the targeted worm is first detected by the chemosensory organs, inducing the extension of a reddish proboscis. The short radula tooth (1-1.5 mm) is then fired and embedded into the worm's body, forcefully pushing through a remarkable quantity of a greenish venom (Figure 3C) [9]. As described for the fish-hunters, the envenomated prey shows immediate involuntary contractions, leading to incapacitation, and is reeled back into the rostrum. Our personal observations on other vermivorous species often reveal, surprisingly, an apparent venom-less strategy, where the snail directly attempts to swallow the worm through its extended rostrum without prior stinging via the proboscis. One of the most mysterious prey-capture strategies relates to the vermivorous species hunting tube worms, as there is no description in the literature.

Reality Check on the Concept of Cabals

Early pharmacological characterization of conotoxins from venom gland extracts revealed a variety of targets and modes of action. From the pharmacological effects obtained mostly on mammals, extrapolations were made to explain the effects observed on prey, and this is how the concept of cabals was first crafted. The cabals are defined as a group of (artificially put together) conotoxins, which seem to modulate the same physiological target or may act synergistically. Thus, the "lightning-strike cabal" is defined as a set of κ- and δ-conotoxins, as well as conkunitzins, which would together elicit an excitatory state in the prey [17,18]. This reaction is due, respectively, to the inhibition of K+ channels, as well as a delayed inactivation of Na+ channels [4].

Meanwhile, the "nirvana cabal" is highly speculative, but could include the release in the surrounding water of a mixture of B1-conotoxins [19] and hormone-like peptides that would induce a "hypoactivity in sensory neuronal circuitry" [4,20,21]. Although prey capture observations of net-hunting species seem to corroborate this hypothesis, there is currently no direct evidence to support any release of venom into the water. Lastly, an additional "motor cabal" was proposed to be responsible for the final flaccid paralysis that prevents the prey from recovering from the initial excitatory shock. The latter involves α-, µ-, and ω-conotoxins that interfere with the neuromuscular junction [18].
Although these cabals were logically formulated, do they actually correspond to the reality of the predatory strategies employed by cone snails to defeat their prey? Nearly thirty years ago, an ingenious procedure, now commonly referred to as "milking", was devised that allows for the collection of the injected venom, providing a direct means of interrogating the conotoxin cocktail used for prey capture [22]. Using a live prey to arouse the cone snail and trigger a predatory behavior, a microcentrifuge tube covered with parafilm, onto which a piece of the prey's tissue is placed, is presented to the tip of the extended proboscis. Sensory cilia at the tip of the proboscis identify the tissue as "prey" and instantaneously trigger the injection of venom through the radula tooth. Such recovered "milked venoms" can then be analyzed and their composition revealed. Over the last two decades, the more milked venoms were investigated, the less obvious the role of the conotoxins described in these cabals became for prey capture [23].

Overall, in all cases investigated, milked venoms appear significantly less complex than dissected gland extracts. For instance, in Pionoconus species, the predatory venom is usually dominated by one class of conotoxins (sometimes the only conotoxins seemingly injected), the κA-conotoxins [24-26]. Therefore, it appears that κA-conotoxins are responsible for the immediate "taser" effect in this clade, not a combination of κ- and δ-conotoxins, as originally described for the lightning-strike cabal. Indeed, injection of κA-conotoxins alone into fish recapitulates the tetanic paralysis observed during prey capture [27]. However, it has to be noted that intraspecific variations in the injected venom can be dramatic and, occasionally, paralytic peptides from the "motor cabal" are detected, suggesting that they could play a significant role in prey capture [23]. Although not fully explained at the time, one aspect of this diversification was later attributed, at least in part, to the unsuspected ability of some cone snails to produce two types of venoms [28].
Defensive Strategies

From the three dozen human deaths reported, it has long been known that cone snails can also inject their venom defensively [29]. In the literature, there is only anecdotal information on the natural predators of cone snails, but fish, mollusks (octopuses), and some crustaceans are known to prey on them (Figure 5). For instance, a rare species of deep-water cone snail was first described only from a shell recovered from the stomach contents of a large fish (personal communication). The defensive use of venom provides an obvious evolutionary advantage. Indeed, avoiding being eaten is one of the most important fitness-related criteria for the survival of a species, together with being able to feed and reproduce. In fact, some venomous animals only use their venom defensively (some hymenopterans, fish, etc.), whereas the reverse is not true, suggesting that the defensive use of venom may actually have a stronger evolutionary role than anticipated, possibly more than predation in some cases [30]. Thanks to their capacity to defend themselves, some species of cone snails have evolved unique behaviors. However, for most species, the first line of defense is usually to retract deeply into the shell, which offers a strong and often inviolable fortress (Figure 6C). Others will respond aggressively to any threat by extending their proboscis (Figure 6D). If the threat intensifies, the cone snail will inject venom into the aggressor, but there are also reports of cone snails squirting venom (personal observations). Additional behavioral studies are needed to fully decipher the complex defensive responses displayed by cone snails.

The most dangerous species to humans, Gastridium geographus, displays an unusually aggressive behavior and will readily use its venom defensively when handled. There seems to be a striking relationship between the fragility of the shell (as in the case of Gastridium geographus) and the propensity to use venom defensively. Typically, large vermivorous species will often be unfazed by any threat, being protected by heavily built shells and narrow apertures [28]. However, many species have been reported to inflict injuries on humans, regardless of their diet, with varying degrees of consequences. Among the known human Conus envenomations, various levels of severity have been distinguished, from fatal to minor effects comparable to bee stings, and the most adverse symptoms were attributed to piscivorous cone snails, especially Gastridium geographus [29].
The first investigation of a defense-evoked venom uncovered an unsuspected twist in cone snail biology [28]. Indeed, the defensive venom of Gastridium geographus was highly complex and contained massive amounts of paralytic conotoxins from the "motor cabal", explaining the lethal symptoms in humans, whereas the predatory venom was devoid of these and instead contained prey-specific conotoxins with no activity on human receptors. Therefore, in this iconic species, paralytic conotoxins directed at the neuromuscular junction are essentially defensive weapons, not part of the prey capture strategy, a result in conflict with the cabal narrative. Following this initial discovery, more data on different species were needed to evaluate how widespread this separate evolution of predatory and defensive venoms is among cone snail species. Triggering and collecting defensive venom can be achieved through different means, including using a natural predator (i.e., a molluscivorous species, such as Conus marmoreus or Cylinder textile), applying pressure to the shell, or pinching the foot of the cone (Figure 6D) [28].
Overall, the remarkable ability of cone snails to purposefully modify their venom composition upon different triggering stimuli (predatory or defensive) offers novel and unprecedented research opportunities. Indeed, separately collecting each venom type will allow unambiguous interpretation of the ecological and evolutionary roles of each conotoxin. In this review, we describe the reported predatory- and defense-evoked venoms of 16 species belonging to eight clades of the Conus genus, three of them piscivorous (Pionoconus, Chelyconus, and Gastridium), two molluscivorous (Cylinder and Conus), and three vermivorous (Stephanoconus, Rhizoconus, and Vituliconus), and discuss the work that remains in order to better understand venom-ecology relationships in cone snails.

Predatory Venom

α-, κA-, δ-, κ-, µ-, and ω-conotoxins constitute the major pharmacological families identified in the predatory venom (Figure 8B). κA-conotoxins are the most abundant (relative contribution to the injected venom) (Table 1), but α-conotoxins are the most prevalent (in terms of number of sequences identified) in the predatory venoms of fish-hunting cone snails (Table 2). These conotoxins are especially represented in the Pionoconus and Chelyconus clades, and less so in the Gastridium clade.

Table 1. κA-Conotoxins identified from predatory and defense venoms of fish-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked or in both venoms. Each conotoxin is characterized by its Conus clade, the Conus species in which it was detected, the given name, its sequence, its classification within the gene superfamilies, and the cysteine framework. Cysteine residues are highlighted in red.
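To make the abundance/prevalence distinction drawn above concrete: abundance weighs each family by its share of the total injected venom signal, whereas prevalence simply counts distinct sequences. The following minimal Python sketch, using entirely hypothetical intensity values (not real milked-venom data), shows how one family can dominate by one metric but not the other:

# A minimal sketch (hypothetical data) contrasting two ways of summarizing
# a milked venom: abundance (share of total signal) vs. prevalence
# (number of distinct sequences per pharmacological family).
from collections import defaultdict

# Hypothetical (toxin family, MS signal intensity) pairs from one milked venom.
observations = [
    ("kappaA", 550.0), ("kappaA", 430.0),  # few sequences, huge signal
    ("alpha", 40.0), ("alpha", 35.0),
    ("alpha", 30.0), ("alpha", 25.0),      # many sequences, modest signal
    ("omega", 5.0),
]

signal = defaultdict(float)
sequences = defaultdict(int)
for family, intensity in observations:
    signal[family] += intensity
    sequences[family] += 1

total = sum(signal.values())
for family in signal:
    print(f"{family}: abundance={signal[family] / total:.1%}, "
          f"prevalence={sequences[family]} sequences")

# kappaA dominates by abundance while alpha leads by prevalence,
# mirroring the pattern reported for fish-hunter predatory venoms.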
As mentioned, superfamily A is the most represented in the predatory venom, owing to κA-conotoxins and α-conotoxins. κA-conotoxins were first discovered in the predatory venom of Pionoconus striatus, with κA-SIVA and κA-SIVB [32], and their short, non-glycosylated equivalents κA-PIVE and κA-PIVF were identified in Chelyconus purpurascens (Table 1) [17,38]. Exhaustive investigation of the predatory venom of Pionoconus consors also revealed the importance of κA-conotoxins, with the abundant injection of κA-CcTx and the sequencing of a series of CcTx variants [24,37,50]. Although not confirmed, the major compounds found in the predatory venom of another Pionoconus species, Pionoconus magus, were determined to lie within the mass range of κA-conotoxins [40]. More recently, κA-conotoxins were identified abundantly in both the predatory and defensive venoms of Pionoconus striatus [32] and in the predatory venom of Pionoconus catus [25]. Interestingly, a recent study identified a variant of the glycosylated κA-conotoxins (κA-SIVC) in the predatory venom of specimens of Pionoconus striatus from Mayotte (France), suggesting that geographical variations can be population-specific [36]. κA-conotoxins were initially characterized as excitatory peptides that block K+ channels, yet controversy remains over the molecular target, since Na+ channels have also been suggested as the targeted receptor [27]. Although they uphold the same IV cysteine framework as certain αA-conotoxins, such as α-OIVA (Table 2), their activities differ: the former are excitatory while the latter are not [41]. Generally, these κA-conotoxins appear as the major and most abundant components in the predatory venom of Pionoconus species and are likely solely responsible for the rapid immobilization of prey.

Exceptionally, δ-conotoxins were also detected in the predatory venom of some fish-hunters of the Pionoconus and Chelyconus clades (Table 3). This family of conotoxins has been characterized as voltage-gated sodium channel (VGSC) modulators. Indeed, δ-conotoxins activate Nav channels by delaying their inactivation mechanism (Figure 2) [54]. δ-Conotoxins, like the κ-conotoxins, are defined as excitatory peptides, which induce rapid tetanic paralysis in the prey [32]. For example, δ-PVIA, isolated from Chelyconus purpurascens, was characterized as the "lock-jaw peptide" because it causes a rigid paralysis of the prey, particularly visible around the mouth musculature [55]. Again, considering their critical role in the "lightning-strike cabal", these conotoxins would be expected to always be injected for prey capture, but this is almost never the case.
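The cysteine frameworks referred to throughout these tables (IV, VI/VII, XX, XXIII, etc.) denote the arrangement of cysteine residues along the mature peptide, following the conventional Conus numbering (e.g., framework I = CC-C-C; VI/VII = C-C-CC-C-C). As a minimal illustration (the framework dictionary below is deliberately abridged; the worked example uses the well-known α-conotoxin ImI sequence from Stephanoconus imperialis), such a pattern can be read directly off a sequence:

# A minimal sketch of how a conotoxin's cysteine framework can be read off
# its mature sequence: collapse the peptide to its cysteine runs and compare
# the pattern against the conventional framework definitions
# (subset shown here; see ConoServer for the full list).
import re

FRAMEWORKS = {
    "CC-C-C": "I",
    "CC-C-C-CC": "III",
    "CC-C-C-C-C": "IV",
    "CC-CC": "V",
    "C-C-CC-C-C": "VI/VII",
    "C-C-C-C-C-C": "IX",
}

def cysteine_pattern(sequence: str) -> str:
    """Reduce a peptide to its cysteine arrangement, e.g. 'C-C-CC-C-C'."""
    runs = re.findall(r"C+", sequence.upper())
    # Keep each run of consecutive cysteines, join runs with a single dash.
    return "-".join(runs)

# alpha-conotoxin ImI (Stephanoconus imperialis), a framework I peptide.
pattern = cysteine_pattern("GCCSDPRCAWRC")
print(pattern)                                  # -> CC-C-C
print(FRAMEWORKS.get(pattern, "unassigned"))    # -> I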
Table 3. δ-Conotoxins identified from predatory and defense venoms of fish-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked venoms. Each conotoxin is characterized by its Conus clade, the Conus species in which it was detected, the given name, its sequence, its classification within the gene superfamilies, and the cysteine framework. Cysteine residues are highlighted in red.

Very few µ-conotoxins have been found in the predatory venoms of the three piscivorous clades, namely in Pionoconus consors, Gastridium geographus, and Chelyconus purpurascens (Table 4). The molecular targets of µ-conotoxins are VGSCs, more precisely Nav channels, which play an important role in the central and peripheral nervous systems (CNS and PNS) (Figure 2). However, contrary to δ-conotoxins, they act as blockers of the channel instead of delaying its inactivation. µ-Conotoxin GS is the only µ-conotoxin identified from Gastridium geographus predatory venom and is highly potent on fish Nav channels, but less so on mammalian ones [57]. Another example is µ-CnIIIC, isolated from the venom of Pionoconus consors, a synthetic version of which was commercialized as a cosmetic to smoothen facial lines (XEP™-018) [58]. µ-CnIIIB was shown more specifically to block tetrodotoxin-resistant (TTX-R) Na channels, where the blockade was observed to be "slow and reversible" [59]. Another type of µ-conotoxins, the µO-conotoxins, restrict channel opening; their intravenous application has been associated with many side effects, such as paralysis and death, in inflammatory and neuropathic pain models, which discouraged further research into them as therapeutic agents [10].

Trace amounts of ω-conotoxins, known as "shaker" peptides, have been detected in the predatory venoms of some fish-hunters [60]. ω-Conotoxins belong to the O1 gene superfamily with a VI/VII framework (Table 5). They are able to selectively block N-type Ca2+ channels (at the nerve terminals) and prevent the release of important neurotransmitters (such as glutamate, GABA, acetylcholine, dopamine, etc.) [61]. They are pore blockers, meaning that they physically block the influx of Ca2+ ions [62]. Upon intracerebral injection, they produce persistent shaking in mice [60]. ω-Conotoxins have been extensively investigated for their potential in pain treatment, including ω-MVIIA (from Pionoconus magus, known as Ziconotide), the only FDA-approved conotoxin drug, as well as ω-CVID (Leconotide), isolated from Pionoconus catus, which is also under development for pain treatment [63]. A few ω-conotoxins have been detected in the predatory venoms of Pionoconus cone snails, such as Pionoconus striatus (ω-SVIA and ω-SVIB) [39,64], Pionoconus consors (ω-CnVIIA) [37,51], and Pionoconus catus (ω-CVIA) [25,61].
Table 4. µ-Conotoxins identified from predatory and defense venoms of fish-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked or the defense-evoked venoms. Each conotoxin is characterized by its Conus clade, the Conus species in which it was detected, the given name, its sequence, its classification within the gene superfamilies, and the cysteine framework. Cysteine residues are highlighted in red.

Finally, some conopeptides, larger conotoxins, and proteins have also been isolated from the predatory venoms of fish-hunting cone snails (Table 6). These include contryphans, detected in Gastridium geographus, Chelyconus purpurascens, and Pionoconus striatus. Contryphans are known as Ca2+ channel modulators. For instance, contryphan-P (Chelyconus purpurascens) was associated with the "stiff-tail" syndrome when injected into mice [54]. Contryphan-S was found in the venom of Pionoconus striatus but was annotated as contryphan-G, as they present identical sequences [32]. Both contryphan-S and -G seem to be used for preying purposes by Pionoconus striatus and Gastridium geographus [28]. Moreover, Gastridium geographus injects non-paralytic compounds, including conopressin, conophysin, contulakin, and conantokin [28,33]. Conopressins and conophysins have been shown to have agonist/antagonist activity against vasopressin receptors [65], while conantokin-T, through its inhibition of NMDA (N-methyl-D-aspartate) receptors, was deemed responsible for the sleep-like state of the prey prior to being engulfed, especially for the "net-hunting" cone snails (Figure 4) [54,66]. Contulakin-G, on the other hand, is a glycopeptide identified as an agonist of neurotensin receptors, which has shown analgesic properties [67]. Interestingly, Pionoconus striatus also injects larger peptides, such as the conkunitzins, which have been characterized as voltage-gated K+ channel blockers affiliated with the Shaker potassium channels [68]. Lastly, large polypeptides, such as p21a (Chelyconus purpurascens) [69] or con-ikot-ikot (Pionoconus striatus) [70], which inhibits AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid) receptors, as well as hyaluronidases (Pionoconus consors and Chelyconus purpurascens) [37,71], phospholipases A2 (Chelyconus purpurascens) [72], and proteases (Chelyconus purpurascens and Chelyconus ermineus) [73], have been identified in predatory venoms. More high-molecular-weight proteins, such as metalloproteases, are likely to be present, but their role in prey capture remains unclear [74].

Table 6. Conopeptides identified from predatory and defense venoms of fish-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked, the defense-evoked, or in both venoms. Each conotoxin is characterized by its Conus clade, the Conus species in which it was detected, the given name, its sequence, its classification within the gene superfamilies, and the cysteine framework. Cysteine residues are highlighted in red.
Defensive Venom

The data on defensive venoms are scarcer than those on predatory venoms. Indeed, so far only Pionoconus striatus and Gastridium geographus, as well as Gastridium obscurus, have been investigated with regard to their defense strategies [32,33]. Six gene superfamilies and three conopeptide classes (Figure 9A) have been identified in the defense venoms. Most of them are A-, O1-, and M-conotoxins, found in both the Pionoconus and Gastridium clades. In addition, B1-, S-, and T-conotoxins are found exclusively in Gastridium cone snails, as are conophysins. Moreover, Pionoconus striatus also injected conkunitzin and con-ikot-ikot (Table 6).

An unusual conotoxin, σ-GVIIIA, was identified in the defense venom of Gastridium geographus (Table 7) [28,33]. This toxin is a σS-conotoxin presenting a VIII cysteine framework and has been characterized as a blocker of the 5-HT3 serotonin receptor ("involved in the inhibition of neurotransmitter release at motor and sensory synapses") [76]. Additional conopeptides, such as conophysin-G [28], were also identified in the defensive strategies. Interestingly, Gastridium geographus also employs two unclassified conotoxins, G5.1 and a so-called "scratcher peptide", named after the "scratching" symptoms it induces.

Table 7. σ-Conotoxin identified from the defense venom of a fish-hunting cone snail (Gastridium geographus). Presented here are conotoxins found exclusively in the defense-evoked venom. Each conotoxin is characterized by its Conus clade, the Conus species in which it was detected, the given name, its sequence, its classification within the gene superfamilies, and the cysteine framework. Cysteine residues are highlighted in red.

Predatory Venom

A little over 60 cone snail species have been identified as mollusc-hunters, which represents a smaller portion than the fish- and worm-hunting species [31]. It is important to note that some cone snails do not exclusively prey on a single type of mollusc prey, but have adapted to more than one [31]. Although a complete genome assembly from mollusc-hunters is still lacking, it is quite possible that mollusc-hunting cone snails originated from a single root, in opposition to piscivorous cone snails, which have been shown to be polytopic [31]. The Cylinder and Conus clades are the most studied in general, as they regroup some of the larger cone snails, such as the common Cylinder textile and Conus marmoreus. Fewer data are available in comparison to the extensively studied piscivorous cone snails, and even fewer on the predatory and defensive differentiation of venom in mollusc-hunters, limited so far to three species from the Cylinder clade (ammiralis, textile, and victoriae) and one from the Conus clade (marmoreus) (Figure 10).
Compared to piscivorous species, the predatory-evoked venom of mollusc-hunters shows high complexity. Analyses of Conus and Cylinder species have permitted the identification of 17 gene superfamilies, 7 conopeptide classes, and a few unclassified toxins (Figure 11). Within this high diversity of injected conotoxins, the M, O1, T, and O2 superfamilies dominate the predatory venoms in general, followed by the I1, A, I2, and H superfamilies. Additionally, a few conopeptide classes, such as insulins, contryphans, conophysins/conopressins, and conorfamides, were also identified, as well as less common classes, such as conomarphins, elevin, and even prohormones (Figure 11).

Although the venom of only a single cone snail from the Conus clade has been characterized (Conus marmoreus), it provided almost half of the knowledge gathered on the predatory strategy of mollusc-hunters. Indeed, the Conus clade predatory venom comprises 14 out of a total of 24 gene superfamilies. Meanwhile, the 3 Cylinder cone snails together yielded only slightly more diversity in their predatory venoms, with 20 gene superfamilies. Of these, two gene superfamilies and two types of conopeptides are found exclusively in the predatory venom of Conus marmoreus (B2 and I2, and conomarphins and contryphans, respectively), while Cylinder cone snails exclusively present five gene superfamilies and five types of conopeptides (F, P, R, elevin, and prohormone in Cylinder victoriae [28,78,79], and I4, J, conophysin/conopressin, conorfamide, and insulin in Cylinder ammiralis [80]) (Figure 11).
Superfamily A conotoxins are found in both molluscivorous clades investigated, but they appear less prevalent than in the piscivorous cone snails (Table 8). Some of these conotoxins have a similar structure and function to the α-conotoxins from fish-hunters, such as α-Mr1.1 (Conus marmoreus) [81] and α-VcIA (Cylinder victoriae) [28]. A very recent proteomic study was conducted to uncover the predatory and defensive venoms of Cylinder ammiralis, which allowed the characterization of new sets of conotoxins and conopeptides, some of which remain unknown [80]. The predatory venom of Cylinder ammiralis shows α-like conotoxins, such as Ai1.2, but these have not been characterized yet [80].

Table 8. A-conotoxins identified from predatory and defense venoms of mollusc-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked or in both venoms. Each conotoxin is characterized by its Conus clade, the Conus species in which it was detected, the given name, its sequence, and the cysteine framework. Cysteine residues are highlighted in red.

As mentioned, M-conotoxins are among the most abundant in predatory venoms, as are O1- and T-conotoxins (Figure 11). M-conotoxins (Table 9), such as MrIIIB (Conus marmoreus) and TxIIIC (Cylinder textile), seemed to induce several symptoms in mice upon intracranial injection, such as scratching, hyperactivity, and circular motion [82], while Mr1e induced excitatory effects [83]. However, it is unknown whether these effects reported in mammalian animal models can be translated to mollusc prey physiology.
Table 9. M-conotoxins identified from predatory and defense venoms of mollusc-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked or in both predatory and defense venoms. Cysteine residues are highlighted in red. The complete table can be found in the Supplementary Table S1.

O1-Conotoxins are also highly produced in the predatory venoms of both clades. Most of them uphold the classical VI/VII framework (Table 10). Among them, we find µ-conotoxins such as µ-MrVIA and µ-MrVIB [81] from Conus marmoreus, which block Na+ currents of VGSCs [84]. Sequence homology can be clearly seen between the O1-conotoxins produced by the different species of mollusc-hunters, for example between TxO1 (Cylinder textile) [16] and P_019 (Cylinder ammiralis), with this conservation indicating a likely important ecological role [80].

Table 10. O1-conotoxins identified from predatory and defense venoms of mollusc-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked, the defense-evoked, or in both venoms. Each conotoxin is characterized by its Conus clade, the Conus species in which it was detected, the given name, its sequence, and the cysteine framework. Cysteine residues are highlighted in red. The complete table can be found in the Supplementary Table S1.

A few O2-conotoxins were identified from the predatory venoms of both Cylinder ammiralis and Conus marmoreus. These are mostly conotoxins of almost 30 amino acids in length, upholding a VI/VII cysteine framework. In addition, a contryphan-like conopeptide (P_163) was also identified in the venom of Cylinder ammiralis (Table 11) [80].

Table 11. O2-conotoxins identified from predatory and defense venoms of mollusc-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked, the defense-evoked, or in both venoms. Cysteine residues are highlighted in red. The complete table can be found in the Supplementary Table S1.

One other major family identified in the predation of molluscivorous cone snails is the T superfamily, which is absent from piscivorous species (Table 12). The T-conotoxins are smaller peptides that uphold a V or X framework made of four cysteine residues. Only a few χ-conotoxins have been identified, some of which inhibit the antidepressant binding site of the Na+-dependent norepinephrine (NE) transporter, such as χ-MrIA [81] in Conus marmoreus. Interestingly, χ-MrIA was identified as an analgesic, efficacious in managing neuropathic pain in mouse experiments [81].

Table 12. T-conotoxins identified from predatory and defense venoms of mollusc-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked or in both predatory- and defense-evoked venoms. Cysteine residues are highlighted in red. The complete table can be found in the Supplementary Table S1.
Defense Venom

The defensive venom of molluscivorous cone snails has only been described for three species: Conus marmoreus, Cylinder ammiralis, and Cylinder victoriae. Similar to the predatory venoms, the defensive venoms are complex, and a high diversity of conotoxin gene superfamilies is detected. A total of 21 gene superfamilies and conopeptide classes were identified: A, H, F, I1, I2, J, M, N, O1, O2, P, R, S, T, U, contryphan, conoporin, conodipine, conopressin/conophysin, conorfamide, and prohormones (Figure 12). The majority of the conotoxins recovered belong to the M, O1, O2, and T superfamilies, which is reminiscent of the predatory venoms, although the nature of the individual conotoxin sequences is often different (Figure 11).

M-Conotoxins include a few toxins that are used in both predatory and defense venoms (Table 9), such as Mr1e, Mr3.8, MrIIIB, MrIIID, MrIIIE, MrIIIF, and MrIIIG in Conus marmoreus, as well as P_147 and P_063 in Cylinder ammiralis. In the same way, the O1- and T-conotoxins include toxins shared between both strategies, such as the µO-conotoxins µ-MrVIA and µ-MrVIB, and VcVIB (Table 10), and the T-conotoxins χ-MrIA and VcVA (Table 12). Some O2-conotoxins, identified only from the predatory venom of Cylinder ammiralis, include contryphan-like toxins, such as D_054 and P_163, which show similarities with contryphan-M (Table 11) [80]. Conopeptides are also found in the defensive venom, including contryphan-M, as well as conoporins, conorfamides, conodipines, conopressins/conophysins, and prohormones (Table 13).

Predatory Venom

While they present the highest number of species within the Conidae, vermivorous cone snails have the least published data on their predatory and defensive venoms. Most likely because of the difficulty in collecting the injected venom (small radula tooth, specific types of prey worms, etc.), the investigation of vermivorous cone snails is lagging behind [31]. Consequently, this lack of investigation has impacted the study of their predatory and defensive venoms. A few cone snails have been studied with that intent, such as the Stephanoconus/Rhombiconus cone snail Stephanoconus imperialis [9], two Rhizoconus cone snails, Rhizoconus vexillum and Rhizoconus capitaneus [86], and a single Vituliconus cone, Vituliconus planorbis [87] (Figure 13).
So far, among the worm-hunters, only the predatory venom of Stephanoconus imperialis could be collected. Its analysis revealed a moderately complex venom, presenting 12 gene superfamilies (Figure 14) [9]. Interestingly, conotoxins with different types of cysteine frameworks are observed, which differ from those of the other types of cone snails (Table 14). Overall, the K-conotoxins were the most abundant in the venom of Stephanoconus imperialis. Indeed, three K-conotoxins, Im23a, Im23b, and Im23.4, were identified with a XXIII framework and a structure containing two helices [9]. Their biological activity is unknown. An α-conotoxin, Im1.1, was also identified.
Defense Venom

Defense venoms have been recorded from vermivorous Rhizoconus and Vituliconus cone snails (Figure 15). First, the defense venom of the Vituliconus cone snail Vituliconus planorbis presented a set of A, J, M, O2, T, Y, and a new superfamily-1 (NSf-1) conotoxins. The most represented were the NSf-1, A, and M gene superfamilies (Table 14) [87]. Rhizoconus worm-hunting cone snails appear to rely massively, and almost exclusively, on homodimeric αD-conotoxins for their defense strategies (Table 15). Indeed, αD-conotoxins, presenting 10 cysteines arranged in a unique XX framework, were discovered in the defense-evoked venom of both Rhizoconus vexillum and Rhizoconus capitaneus [86]. They can be detected in their dimeric form between 10 and 12 kDa. Similar to some defense-related αA-conotoxins, they potently inhibit the α7 nAChR subtype in mammalian assays [86], but more information is needed to understand their defensive role [86].

Table 15. αD-conotoxins identified from predatory and defense venoms of worm-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked or the defense-evoked venoms. Cysteine residues are highlighted in red. The complete table can be found in the Supplementary Table S1.

Discussion

When first discovered, the remarkable ability of cone snails to produce different venoms for specific purposes (predation or defense) was totally unexpected. For the first time, it was shown that, within the same animal, a defensive injection is not simply equivalent to a predatory sting, and vice versa. While this novel paradigm has immediate implications, for instance for the management of envenomation victims or for our understanding of cone snail biology, it also argues for the future need to characterize individual conotoxins in the correct animal model to avoid erroneous conclusions about their true ecological role. In the case of vertebrate venomous animals, such as snakes, both prey (often small mammals, but also reptiles and birds) and predators (mostly higher vertebrates) share a very conserved physiology, explaining how, by a fortunate coincidence, the same toxins can be effective both in capturing prey and in defending against a predator [88]. On the contrary, cone snails had to deal with more complicated venom uses (very diverse vertebrate/invertebrate prey and predators), stimulating the evolution of distinct strategies [89]. We therefore encourage future studies to investigate predatory venoms in laboratory animals more closely related to the prey types, such as zebrafish (piscivorous), Lymnaea or Aplysia snails (molluscivorous), and worms (vermivorous), including Caenorhabditis elegans, although annelids would be preferable over nematodes.
The controlled and selective injection of different conotoxin cocktails suggests that the evolution of cone snail venom is not under the direct influence of one but of at least two, potentially equally important, driving forces: predation and defense. It therefore provides a unique opportunity to study venom-ecology relationships in unprecedented detail, on the condition that both predatory and defensive venoms can be collected and analyzed separately. From the current literature, it appears that injected venom can theoretically be collected from all cone snail diet groups. However, this review exposes a clear bias toward fish-hunting species in comparison to the other feeding groups. This bias likely has to do with the ease of "milking", as piscivorous species tend to use a large and strong radula tooth, which facilitates the collection procedure. In addition, fish prey can be conveniently obtained in most laboratories, as piscivorous cone snails will happily accept freshwater fish (goldfish) or zebrafish (with the caveat of ethical considerations around using vertebrate animals). Molluscivorous species also use large radula teeth, which are thinner and more flexible, rendering the milking more technical but still relatively simple. The real challenge comes with the tiny radula teeth of most worm-hunting species (often <1 mm), which rarely pierce through both the prey tissue and the parafilm for successful milking (personal experience).

An important note relates to the description of injected venoms prior to 2014, before distinct predatory and defensive venoms in cone snails were known. Indeed, some of the variations observed in the composition of "milked venoms" could a posteriori be attributed to the non-discrimination between predatory and defensive behaviors. Another area of discrepancy concerns the study of individual vs. pooled injected venoms. From personal experience, pooling large batches of injected venom can "falsely" increase venom complexity (accumulating all subtle individual variations into one complex lot) and, more importantly, can also lead to inadvertent mixing of defensive and predatory venoms. For pharmacological characterization of conotoxins, larger amounts of venom are required and pooling can be acceptable, but the investigation of precise envenomation strategies should be limited to individual milking. Lastly, the use of proteomic software that allows the automated interpretation of MS/MS spectra and the identification of peptides and proteins from sequence databases may generate substantial numbers of false positives if not carefully curated [90]. These could therefore artificially inflate the number of conotoxin sequences identified in injected venoms.
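One standard curation safeguard against such false positives is target-decoy filtering, in which identifications are accepted only while the estimated false discovery rate (decoy hits relative to target hits) stays below a chosen threshold. A minimal Python sketch with invented match scores (not drawn from any real search output) illustrates the principle:

# A minimal sketch (illustrative scores, not from any real dataset) of the
# standard target-decoy approach used to curate peptide-spectrum matches:
# identifications are accepted only down to the score at which the estimated
# false discovery rate (decoys / targets) exceeds 1%.

# (score, is_decoy) pairs for hypothetical conotoxin peptide-spectrum matches.
matches = [
    (98.2, False), (95.1, False), (91.7, False), (88.4, False),
    (85.0, True),  (83.9, False), (80.2, False), (77.5, True),
    (74.1, False), (70.0, True),
]

matches.sort(key=lambda m: m[0], reverse=True)

accepted = []
targets = decoys = 0
for score, is_decoy in matches:
    decoys += is_decoy
    targets += not is_decoy
    fdr = decoys / max(targets, 1)
    if fdr > 0.01:
        break
    if not is_decoy:
        accepted.append(score)

print(f"{len(accepted)} identifications kept at 1% FDR: {accepted}")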
With these considerations and limitations in mind, a high diversity of conotoxin gene superfamilies has been identified in the compositions of predatory and defensive venoms to date (summarized in Figure 16). However, it is puzzling that particular species can inject a very restricted set of conotoxins selected from the complex repertoire present in their venom duct. For instance, piscivorous species of the Pionoconus clade often rely on a very simple predatory venom composition to subdue fish prey, sometimes with only one class of conotoxin injected (κA-conotoxins). Similarly, some vermivorous species of the Rhizoconus clade have evolved a defensive strategy almost exclusively centered around the injection of αD-conotoxins. The conotoxin selection mechanisms are not understood, but they are likely under the control of the nervous and/or hormonal systems. Importantly, some species from more basal clades show a non-differentiated venom duct, suggesting that they may not have the ability to produce two types of venom, although this remains to be experimentally demonstrated [91].

Ideally, further work on cone snails should systematically include the characterization of predatory and defense venoms when possible. The more data collected on different clades and diet types, the closer we will come to truly understanding the intended evolved venom use and the ecological role of each conotoxin. For instance, no data are available for the predatory venom of any Textilia species, which may seem surprising considering the piscivorous diet, large radula tooth, and relatively wide geographical distribution of some species in this clade, such as Textilia bullatus. Moreover, the characterization of the defense venoms is still lacking for many cone snail clades, especially for common and large molluscivorous species, such as Cylinder textile, episcopatus, aulicus, or canonicus, and other members of the molluscivorous Conus clade (e.g., Conus bandanus or Conus araneosus). Another future line of research should investigate the possible use of small molecules in predatory, defensive, or both venoms [92]. Indeed, recent publications have focused on the characterization of non-peptidic components in the venom of cone snails [93]. Some of these small molecules were even ascertained to have a direct role in the prey-capture strategy of a vermivorous species on the basis of their structural and functional resemblance to polychaete mating pheromones [94]. Critically, however, the role of small molecules in the ecology of cone snails can only be assessed relative to their presence (or absence) in the injected venoms.
With the advent of increasingly sophisticated tools to investigate venom, including those in the fields of mass spectrometry, peptide synthesis, and high-throughput sequencing and screening, novel details about the predatory and defensive strategies of cone snails are likely to be revealed in the near future [95]. This has also allowed the rise of multi-omics strategies known as "venomics", integrating genomics, proteomics, and transcriptomics to accelerate the characterization of complex venoms [56]. For instance, it might become possible to test the hypothesis of the release of venom components into the water in the case of piscivorous net-hunting species. Furthermore, new AI (artificial intelligence)-based bioinformatic tools may also assist in the interpretation of "big data", in visualizing conotoxin-receptor interactions at the atomic level, or in developing novel hypotheses. However, the main challenge will remain uncovering the pharmacology of conotoxins through prey- and predator-relevant bioassays.

Figure 3. Major diet types observed in cone snails. (A) The piscivorous diet is represented here with a Pionoconus striatus specimen, which uses a "taser-and-tether" strategy to subdue its fish prey. The radula tooth is modified into a mini-harpoon. (B) Cylinder ammiralis is a molluscivorous species that injects thick venom multiple times through fine and long arrow-like radula teeth to incapacitate its gastropod prey. (C) Stephanoconus imperialis, which preys exclusively on amphinomid worms, uses a short and stout radula tooth to forcefully inject its greenish venom in large quantities. Horizontal bars indicate 1 mm.

Figure 4. Piscivorous "taser-and-tether" and "net-hunting" strategies. (A) Pionoconus striatus is the prototypical species that uses a "taser-and-tether" strategy. The extended proboscis is reminiscent of a fishing line, and the radula tooth is modified into a mini-harpoon to tether a prey. (B) The net-hunting strategy of Gastridium geographus implies the extension of its rostrum in order to engulf a school of fish, which are already dazed by the hypothetical release of sedative compounds into the water.
Figure 5. Natural predators of cone snails. The left panel shows the known predators of cone snails, whereas the right panel shows an example of the damage caused by a crab that was held in captivity together with various mollusks, including cone snails.

Figure 6. The defensive behaviors of cone snails. A defensive reaction can be triggered by different means, including using a natural predator (A,B), aggravating the animal by directly interacting with it (C), or applying pressure to the shell (D). Live cone snails should not be handled.

Figure 8. Gene superfamilies (A) and pharmacological families (B) identified within the predatory-evoked venoms of piscivorous cone snails.

Figure 10. Shells of some of the molluscivorous cone snails that have been characterized at the peptide level, according to their clade: Conus marmoreus, Cylinder ammiralis, Cylinder textile, and Cylinder victoriae [34].

Author Contributions: Z.R.: writing-original draft preparation; S.D. and N.I.: writing-review and editing; S.D. and N.I.: supervision; S.D. and N.I.: project administration; S.D. and N.I.: funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the Departmental Council of Mayotte through the projects VENCOME and "Chercheuse d'avenir", promoting Mahoran women in research (PhD thesis funding for Z.R.). Contribution to the funding: Dhahabia Chanfi (direction of school politics and universities) and Fahoulia Mohamadi (representative of research and innovation at the teaching authority, 'rectorat de Mayotte').

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Table 2. α-Conotoxins identified from predatory and defense venoms of fish-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked, the defense-evoked, or in both venoms. Each conotoxin is characterized by its Conus clade, the Conus species in which it was detected, the given name, its sequence, its classification within the gene superfamilies, and the cysteine framework. Cysteine residues are highlighted in red.

Table 5. ω-Conotoxins identified from predatory and defense venoms of fish-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked, the defense-evoked, or in both venoms. Each conotoxin is characterized by its Conus clade, the Conus species in which it was detected, the given name, its sequence, its classification within the gene superfamilies, and the cysteine framework. Cysteine residues are highlighted in red.
Table 14. Various conotoxins identified from predatory and defense venoms of worm-hunting cone snails. Presented here are conotoxins found exclusively in the predation-evoked or the defense-evoked venoms. Cysteine residues are highlighted in red. The complete table can be found in the Supplementary Table S1.
Veterinary Hospital Dissemination of CTX-M-15 Extended-Spectrum Beta-Lactamase–Producing Escherichia coli ST410 in the United Kingdom We characterized extended-spectrum beta-lactamases (ESBLs) and plasmid-mediated quinolone resistance (PMQR) in 32 Escherichia coli extended spectrum cephalosporin (ESC)-resistant clinical isolates from UK companion animals from several clinics. In addition, to investigate the possible dissemination of ESBL clinical isolates within a veterinary hospital, two ESBL-producing E. coli isolates from a dog with septic peritonitis and a cluster of environmental ESC-resistant E. coli isolates obtained from the same clinic and during the same time period, as these two particular ESBL-positive clinical isolates, were also included in the study. Molecular characterization identified blaCTX-M to be the most prevalent gene in ESC-resistant isolates, where 66% and 27% of clinical isolates carried blaCTX-M-15 and blaCTX-M-14, respectively. The only PMQR gene detected was aac(6')-Ib-cr, being found in 34% of the ESC E. coli isolates and was associated with the carriage of blaCTX-M-15. The clinical and environmental isolates investigated for hospital dissemination had a common ESBL/AmpC phenotype, carried blaCTX-M-15, and co-harbored blaOXA-1, blaTEM-1, blaCMY-2, and aac(6')-Ib-cr. Multilocus sequence typing identified them all as ST410, while pulse-field gel electrophoresis demonstrated 100% homology of clinical and environmental isolates, suggesting hospital environmental dissemination of CTX-M-15–producing E. coli ST410. Introduction E scherichia coli are opportunistic pathogens in humans and companion animals and can be associated with a variety of extraintestinal infections, which may require antimicrobial therapy. 1 Increased use of antimicrobials in companion animals may select for antimicrobial-resistant bacteria, and concerns have been raised that these host species may act as potential reservoirs for human infections. 2 Particularly concerning for both human and veterinary health is the increasing resistance to extended spectrum cephalosporins (ESCs) through the production of extendedspectrum beta-lactamases (ESBLs). [3][4][5][6] There is increasing evidence that E. coli-producing ESBL and/or AmpC betalactamases are emerging in companion animals, 7,8 setting new challenges for veterinary practitioners due to therapeutic and infection control implications. Several studies have described the prevalence of ESBL resistance in bacteria from companion animals, 3,4,[9][10][11] but the role that these animals may play in the spread of such resistant bacteria or determinants, is not yet fully determined. Previous molecular studies have shown that, in human hospital settings, ESBL genetic determinants have the potential to spread either through clonal dissemination or plasmid transfer, posing a serious threat to patient care and safety. 6,12 The development of large veterinary hospitals with intensive care facilities has created similar conditions for the emergence of animal hospital acquired infections and a few studies have shown the association of multidrug resistant (MDR) organisms, such as E. coli, Acinetobacter baumannii, Enterobacter spp., and Enterococcus spp., with animal nosocomial infections. [13][14][15][16][17][18][19] However, with the exception of a few recent studies showing the hospital acquisition and/or dissemination of betalactamases or ESBL-producing Klebsiella pneumoniae and A. 
However, with the exception of a few recent studies showing the hospital acquisition and/or dissemination of beta-lactamase- or ESBL-producing Klebsiella pneumoniae and A. baumannii, [20][21][22][23] and compared with the wealth of data from human medicine, there is a paucity of studies investigating the potential of ESBL-producing E. coli to spread and cause nosocomial infections in veterinary clinics or hospital settings. The aim of this study was twofold: first, to characterize ESBL and plasmid-mediated quinolone resistance (PMQR) genes in E. coli from clinical specimens submitted for routine bacterial culture to the Veterinary Microbiology Diagnostics laboratory in the Liverpool School of Veterinary Science; second, to analyze and compare a cluster of clinical and environmental ESBL-producing E. coli obtained from a UK veterinary hospital to identify the potential of clinical isolates to spread within such environments.
Bacterial isolates
All E. coli isolates were obtained from companion animal clinical specimens submitted from a veterinary hospital and a number of small veterinary clinics and collected for this study between January 2010 and November 2011. Clinical specimens were plated out aerobically on 5% sheep blood agar (Oxoid) and incubated for 24 hours at 37°C. Clinical isolates presumptively identified as E. coli based on a positive reaction on Eosin Methylene Blue Agar (EMBA; Oxoid, Basingstoke, UK), and which showed reduced susceptibility to cefpodoxime (10 µg) and/or cefoxitin (30 µg), used as indicators for ESBL and AmpC production, were selected for this study. In addition, samples from active environmental bacterial surveillance, which is also offered to veterinary hospitals as part of the Diagnostic Service, are also processed by the laboratory, and a cluster of hospital environmental E. coli isolates was also included in this study. Detection of resistance to ESCs in environmental E. coli isolates followed the same protocols as for clinical isolates. The identification of clinical and environmental isolates was performed using API 20E Identification Kits (bioMerieux, France) and also by PCR detection of the uidA gene for confirmation of E. coli. 24
Antimicrobial susceptibility testing
Susceptibility testing was performed by disc diffusion against representatives of beta-lactam and non-beta-lactam antimicrobial classes on ISO-Sensitest agar (Oxoid, Basingstoke, UK), and results were interpreted according to the BSAC (British Society for Antimicrobial Chemotherapy) interpretive criteria. 25 E. coli ATCC 25922 was used as the control strain. All isolates were tested for ESBL production by the double disc synergy test (DDST). 26
Characterization of ESBL and other resistance genes
Cell lysates obtained from all investigated isolates were screened by PCR and DNA sequencing for the presence of bla CTX-M, bla SHV, bla TEM, bla OXA, plasmid-mediated bla AmpC variants, the PMQR genes qnrA, qnrB, and qnrS, as well as the cr variant of aac(6')-Ib, as previously described. [27][28][29][30][31] Specific PCR assays were performed to identify the possible association of bla CTX-M-15 with the ISEcp1 or IS26 insertion elements, which have been shown to be involved in the mobilization and expression of bla CTX-M genes. 32,33
Resistance transfer and PCR-based replicon typing
To determine the transferability of the ESBL and PMQR genes, conjugation by plate mating was performed with streptomycin-resistant E. coli HB101 as the recipient and seven selected donors harboring bla CTX-M-15 (n = 6) or bla CTX-M-14 (n = 1). Plasmid replicons involved in the transfer of the resistance genes were analyzed by PCR-based plasmid replicon typing (PBRT) as described by Carattoli et al. 34
Molecular characterization of isolates
A multiplex PCR described by Clermont et al. 35 was used to assign the E. coli isolates to a phylogenetic group. Genetic relatedness of isolates identified to carry bla CTX-M genes was analyzed by macrorestriction pulsed-field gel electrophoresis (PFGE) (www.cdc.gov/pulsenet/pathogens/). Data were analyzed using BioNumerics software version 5.1 (Applied Maths). A tolerance of 1.00% was selected and cluster analysis of PFGE pulsotypes was performed by the unweighted pair group method with average linkages (UPGMA), using the Dice coefficient to analyze similarities and define pulsotypes. PFGE pulsotypes were identified as isolates with ≥90% similarity. Multilocus sequence typing (MLST) was performed as previously described 36 for at least one isolate from each identified PFGE cluster.
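The pulsotype definition used here (Dice similarity between band patterns, UPGMA clustering, and a ≥90% similarity cut-off) is straightforward to express in code. The sketch below is purely illustrative: the band sizes are hypothetical placeholders, not the study's BioNumerics data.

```python
# Illustrative sketch of pulsotype grouping: pairwise Dice similarity
# between PFGE band patterns, UPGMA ("average") clustering, and a 90%
# similarity cut-off. Band sizes are hypothetical, not study data.
from scipy.cluster.hierarchy import fcluster, linkage

def dice(a: set, b: set) -> float:
    """Dice coefficient: 2 x shared bands / total bands in both lanes."""
    return 2 * len(a & b) / (len(a) + len(b))

profiles = {
    "isolate_1": {48, 97, 145, 210, 290},
    "isolate_2": {48, 97, 145, 210, 290},  # identical pattern
    "isolate_3": {33, 60, 120, 260},
}
names = list(profiles)
# Condensed distance vector (1 - Dice) in the order linkage() expects
dist = [1 - dice(profiles[a], profiles[b])
        for i, a in enumerate(names) for b in names[i + 1:]]
tree = linkage(dist, method="average")  # UPGMA
# Isolates within 0.10 distance (>= 90% similarity) share a pulsotype
print(dict(zip(names, fcluster(tree, t=0.10, criterion="distance"))))
```

With these inputs, the two identical patterns fall into one pulsotype and the third isolate forms its own, mirroring how PT 4 grouped the clinical and environmental isolates in this study.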
Bacterial isolates and antimicrobial susceptibility testing
Four hundred and forty-five E. coli isolates (n = 445) were obtained from companion animal clinical specimens between January 2010 and November 2011, of which 32 (7%) cefpodoxime- and/or cefoxitin-resistant nonduplicate isolates (30 canine and two feline) were characterized in this study. The selected isolates were both from normally sterile sites [urine (n = 6), liver/bile (n = 4), abdominal fluid (n = 3), bronchoalveolar lavage (n = 1), lymph-node biopsy (n = 1)] and also from sites colonized with normal flora and where cultures yielded mixed bacterial growth [colon biopsies (n = 3), wounds (n = 4), skin/ear swabs (n = 5), fecal samples (n = 5)]. In addition, a cluster of environmental ESC-resistant E. coli isolates (n = 6), obtained from the same clinic and during the same time period as two particular ESBL-positive clinical isolates, was also included in the study and characterized by the same methods as the clinical isolates. These two ESBL-positive clinical isolates were from the same dog, which had been admitted with septic peritonitis following duodenal ulceration; one isolate was obtained from abdominal fluid (12L-0659) and one from a surgical site wound swab (12L-0671) following surgery. The environmental ESC-resistant E. coli isolates were obtained from the ultrasound table (EBM-111) where the dog was examined and also from various areas of the ward where the dog was hospitalized; these included the kennel area, the drip pump attached to the kennel, the ward door handle, the ward fridge handle, and the ward computer keyboard (EBM-114, EBM-115, EBM-116, EBM-118, and EBM-119).
All isolates characterized in this study showed resistance to ampicillin, amoxicillin-clavulanic acid (CV), cefotaxime, cefpodoxime, ceftazidime, and tetracycline. In addition, 71% of isolates exhibited resistance to cefoxitin, 65% to ciprofloxacin, and 59% to trimethoprim/sulfamethoxazole (Table 1). Interestingly, 29% of isolates showed resistance to amoxicillin-clavulanic acid but susceptibility to cefoxitin. This may indicate that mechanisms such as those involving combinations of bla CTX-M-15 and bla OXA-1 genes (as seen in isolates 10L-4543, 11L-1298, 10L-2646) give rise to this phenotype. Other mechanisms responsible for amoxicillin-clavulanic acid resistance, but which do not normally confer resistance to the cephamycins, include hyperproduction of TEM-1 or SHV-1 beta-lactamases. Although we did not attempt to determine whether this was the case, a number of the tested isolates carried bla TEM-1b only and were fully susceptible to cefoxitin (10L-1747, 10L-2253, 11L-2520, Table 1).
The eight E. coli isolates included in this study for comparison (two clinical and six environmental) were processed in the diagnostic laboratory simultaneously, and the identical susceptibility phenotypes identified in this group of isolates triggered closer investigation. In the DDST, they showed no synergy for ceftazidime and cefotaxime with CV combinations, while only a small zone of inhibition (less than 4 mm) appeared for the cefpodoxime/CV combination. All isolates were resistant to cefoxitin, and this raised the possibility of the ESBL phenotype being masked by the additional presence of an AmpC cephalosporinase, which is not inhibited by CV; additional testing with a cefepime/CV combination revealed the presence of ESBL phenotypes in these eight clinical/environmental E. coli isolates.
Characterization of ESBL and other resistance genes
Among the ESC-resistant clinical E. coli isolates, CTX-M type ESBL was the most prevalent, found in 56% (18/32) of isolates, of which 66% (12/18) harbored bla CTX-M-15, with bla CTX-M-14 being found in five isolates (27%) and bla CTX-M-27 identified in one isolate. With the exception of one isolate, which carried bla CTX-M-15 alone, the remaining ESC-resistant clinical E. coli isolates also carried bla TEM-1, bla CMY-2, and/or bla OXA-1 in various combinations (Table 1). The aac(6')-Ib-cr gene was the only PMQR gene detected, although at high prevalence (34.3%), and was associated with the carriage of bla CTX-M-15. The eight clinical and environmental isolates that showed the common ESBL/AmpC phenotype carried bla CTX-M-15 and also co-harbored bla OXA-1, bla TEM-1, bla CMY-2, and aac(6')-Ib-cr. Specific PCR assays revealed that ISEcp1 or IS26, or in some isolates ISEcp1 disrupted by IS26, was associated with bla CTX-M-15 (Table 1). In addition, ISEcp1 was associated with bla CTX-M-14 in one of the five isolates and was not found to be associated with bla CTX-M-27, findings which support the diversity of CTX-M genetic arrangements in E. coli isolates resulting from various mobilization events. PCR also showed that bla TEM-1, bla CMY-2, and bla OXA-1, as well as aac(6')-Ib-cr, had co-transferred with bla CTX-M-14/15 in all transconjugants, indicating that they are located on conjugative plasmids.
Molecular typing of isolates
Phylogenetic typing identified that 63% of E. coli isolates belonged to phylogenetic group A, and the remaining isolates were typed to the more potentially pathogenic groups B2 (21%) or D (15%). PFGE showed clonal diversity of the CTX-M-positive isolates, and five pulsotypes (PT 1, 2, 4, 6, and 7) were identified with similarity of isolates greater than 90% (Fig. 1). The main group (PT 4, n = 9) included the eight clinical and environmental isolates with the common ESBL/AmpC phenotype and, interestingly, another clinical isolate from a colon biopsy obtained from a dog (11L-2603), which had been admitted with diarrhea in the same clinic 7 months previously. The second main cluster (PT 6) was formed by three isolates identified by MLST as belonging to the human pandemic ST131. Interestingly, a fourth member of this clone, which carried bla CTX-M-27, showed only 73% similarity with the ST131 group. MLST also showed that the clinical and environmental isolates with the common ESBL/AmpC phenotype belonged to ST410, while the next most common ST identified in our bla CTX-M isolates was ST617 (Fig. 1).
Discussion
coli in clinical specimens from companion animals in the UK, which is considerably higher than that found in similar studies from pets in France (3.7%) or The Netherlands (2%). 3, 10 We also found a high prevalence (56%) of CTX-M type ESBL-producing E. coli from clinical specimens where 66% of clinical ESBL-producing E. coli carried bla CTX-M-15, which is among the highest reported rates in companion animals. To the best of our knowledge, higher carriage rates of bla CTX-M-15 in clinical animal isolates have only been reported in the United States where 78% of the ESBLproducing E. coli clinical isolates from companion animals were found to carry bla CTX-M- 15. 11 In Europe, 46% and 36% of canine ESBL-producing E. coli isolates (from Germany and France, respectively) carried bla CTX-M-15. 10,37 In The Netherlands, Dierikx et al., 3 found that of 29 E. coli isolates with an ESBL/AmpC phenotype from diseased dogs, cats, and horses, only five isolates (from dogs) (17%) carried bla CTX-M-15. In addition, a lower prevalence of bla CTX-M-15 was found in Switzerland, where eight of the 107 E. coli isolates obtained from canine urine samples (7.4%) were ESBLs and all carried bla CTX-M-15. 4 Furthermore, only one E. coli isolate carried this gene in a similar study in Italy and no bla CTX-M was identified in a study characterizing multidrugresistant canine urinary E coli isolates from Scotland. 38,39 E. coli carrying bla CTX-M-15 is the most common ESBL type associated with infections in humans in the United Kingdom and Europe. 6 On this basis, the high prevalence of veterinary clinical isolates carrying bla CTX-M-15 identified in this study is worrying both in the context of likely interspecies transfer (man to animals), as well as previous studies identifying animals as a potential reservoir of ESBLproducing E. coli for human infection. 40,41 In addition, the clinical and environmental isolates investigated for hospital This study demonstrated veterinary hospital dissemination of clinical E. coli ST410 isolates co-harboring bla CTX-M-15, bla TEM-1, OXA-1 or CMY-2, and acc(6')-Ib-cr, a genotype conferring MDR and often associated with human clinical isolates. [42][43][44] Following the confirmation of the ESBL/AmpC phenotype in these isolates, the laboratory contacted the veterinary hospital's infection control team, which took action by cleaning and disinfection of the areas/surfaces identified as sources of these organisms and reinforced hand hygiene policy. The environmental sampling was repeated after reinforcing cleaning and disinfection protocols and no E. coli isolates with an ESBL/AmpC phenotype were identified in the subsequent bacterial environmental surveillance specimens. This study demonstrates the role that the microbiology laboratory can play in the early detection and prevention of MDR isolate dissemination in veterinary hospitals. The presence of multiple blactamases in Gram-negative bacteria may interfere with the ESBL phenotypic confirmatory tests 45,46 and it is therefore important that veterinary diagnostic microbiology laboratories are continuously updating their detection methods to recognize ESBL, AmpC, or other emerging resistance phenotypes and to translate the therapeutic or epidemiological significance of these findings to veterinary clinicians. This study also highlights the importance of infection control programs and the benefits of environmental surveillance in the veterinary hospitals for limiting the spread of nosocomial pathogens. 
Furthermore, the dissemination of the ESBL/AmpC E. coli ST410 isolates from veterinary patients (probably from surgical wounds) to the environment, as shown in this study, may indicate a pattern of spread that can also occur in the community, especially in the owner's home, highlighting the associated human health risk. Therefore, accurate laboratory detection of ESBL/AmpC phenotypes can support veterinary hospitals in implementing policies for informing owners and providing infection control advice to limit owners' exposure and the associated transmission risks. Recent EUCAST and CLSI guidelines recommend that, when using the new interpretive breakpoints, routine ESBL testing is no longer necessary, and that reporting of susceptibility results to penicillins and cephalosporins for ESBL-producing Enterobacteriaceae should be 'as found'. 47,48 However, these new guidelines indicate that ESBL screening may still be useful for epidemiological reasons. 47,48 Our findings, demonstrating a high prevalence of CTX-M-15 ESBL-producing E. coli in clinical specimens from companion animals, as well as the dissemination of E. coli ST410 through the hospital environment, support the need for veterinary laboratories to continue ESBL screening and to continuously upgrade their expertise in the detection of complex antimicrobial resistance phenotypes, to benefit both human and animal health.
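As a side note, the phylogrouping reported above (63% group A, 21% B2, 15% D) was obtained with Clermont's triplex-PCR scheme, which reduces to a small decision tree over three markers. A minimal sketch, assuming the standard chuA/yjaA/TspE4.C2 rules:

```python
# Minimal sketch of the Clermont triplex-PCR decision tree for E. coli
# phylogrouping (groups A, B1, B2, D), assuming the standard rules:
# chuA+/yjaA+ -> B2; chuA+/yjaA- -> D; chuA-/TspE4.C2+ -> B1; else A.
def clermont_phylogroup(chuA: bool, yjaA: bool, tspE4_c2: bool) -> str:
    if chuA:
        return "B2" if yjaA else "D"
    return "B1" if tspE4_c2 else "A"

# Example: chuA-negative / TspE4.C2-negative isolates fall into group A,
# the group to which most isolates in this study belonged.
assert clermont_phylogroup(False, False, False) == "A"
assert clermont_phylogroup(True, True, False) == "B2"
```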
Biochemical characterization of aminopeptidase N2 from Toxoplasma gondii Aminopeptidase N (APN) is a member of the highly conserved M1 family of metalloproteases, and is considered to be a valuable target for the treatment of a variety of diseases, e.g., cancer, malaria, and coccidiosis. In this study, we identified an APN gene (TgAPN2) in the Toxoplasma gondii genome, and performed a biochemical characterization of the recombinant TgAPN2 (rTgAPN2) protein. Active rTgAPN2 was first produced and purified in Escherichia coli. The catalytic activity of the enzyme was verified using a specific fluorescent substrate, H-Ala-MCA; rTgAPN2 was relatively active in the absence of added metal ions. The addition of some metal ions, especially Zn2+, inhibited the activity of the recombinant enzyme. The activity of rTgAPN2 was also reduced in the presence of the chelator EDTA in the absence of added metal ions. The optimum pH for enzyme activity was 8.0; the enzyme was active in the 3–10 pH range. The substrate preference of rTgAPN2 was evaluated, and the enzyme showed a preference for substrates containing N-terminal Ala and Arg residues. Finally, bestatin and amastatin were shown to inhibit the activity of the enzyme. In conclusion, rTgAPN2 shared general characteristics with the M1 family of aminopeptidases but also had some unique characteristics. This provides a basis for studying the function of aminopeptidases and for drug target research.
extracellular matrix, thereby promoting tumor cell infiltration and metastasis [12]. It can thus be imagined that the study of Toxoplasma proteins would not only benefit the development of anti-toxoplasmosis drugs, but may also help to better understand the origin of cancer.
According to a previous report [10] and sequence alignment in ToxoDB, there are three APNs in the T. gondii genome that belong to the M1 metalloprotease family (TgAPN1, TgAPN2 and TgAPN3). To date, no data are available on the biochemical activity characteristics of TgAPNs. Recent analysis of the T. gondii genome led to the identification of a gene encoding a new putative APN-like protease (TgAPN2) that belongs to the M1 metalloprotease family. This study is the first report on the enzymatic activity of a recombinant TgAPN2 protein toward synthetic aminopeptidase substrates. Studying the catalytic properties of the enzyme may facilitate the understanding of the biochemical characteristics of the M1 family metalloproteinases, and instigate the research and development of APN-targeting anti-toxoplasmosis drugs or other drugs.
Parasite strains and growth conditions
Tachyzoites of the T. gondii RH strain were maintained in human foreskin fibroblast (HFF) cells or Vero cells cultured in Dulbecco's minimum essential medium (DMEM; GIBCO, Invitrogen, San Diego, CA, U.S.A.) supplemented with 8% heat-inactivated fetal bovine serum (Invitrogen) and 1% penicillin/streptomycin at 37°C in a 95% air / 5% CO2 environment. To purify T. gondii tachyzoites, parasites and host cells were washed in cold phosphate-buffered saline (PBS). The final pellet was resuspended in cold PBS and passed three times through a 27-gauge needle. The parasites were finally passed through 5.0-µm-pore filters (Millipore, Bedford, MA, U.S.A.) and stored at −80°C until use.
Sequence alignment and construction of a phylogenetic tree
The signal peptide of TgAPN2 was predicted with the SignalP algorithm, and conserved protein domains were identified using the PFAM algorithm. The amino acid sequences of the following APN family members and several selected members of the M1 zinc metallopeptidase family were aligned using the BLAST program (GenBank accession numbers are indicated in parentheses): T. gondii (XP_008881821), P. falciparum (XP_001349846.1), Neospora caninum (XP_003884424.1), and Homo sapiens (NP_001141.2). The three-dimensional protein structure of TgAPN2 was predicted with SWISS-MODEL (https://swissmodel.expasy.org/) [5]. The phylogenetic tree was computed using the MEGA (version 5) program with a Muscle alignment and the neighbor-joining method. Bootstrap values were calculated after resampling 10,000 times. The predicted APN sequences of T. gondii (TGGT1_224350A), Cytauxzoon felis (CF002488), Eimeria acervulina (EAH_00017220), Eimeria maxima (EMWEY_00046640), Babesia bigemina (BBBOND_0208790), and N. caninum were obtained from the ToxoDB database (http://toxodb.org/toxo/). The sequences of the E. tenella, P. falciparum, and H. sapiens enzymes were obtained from the NCBI protein database.
Expression of rTgAPN2 in Escherichia coli
TgAPN2 cDNA was amplified using the following primers: pET30aAPNFwd, 5′-GCTGATATCGGATCCGAATTCAAACACCGCCTCGACTATAA-3′, and pET30aAPNRev, 5′-TTGTCGACGGAGCTCGAATTCGGCGGTCCGTTCGGGTCCTT-3′ (EcoRI sites are underlined). The PCR product with a 5′-terminal His-encoding sequence was cloned into the pET30a vector (Takara, Dalian, China). Verified plasmids were used to transform the E. coli BL21 strain. A liquid culture of transformed E. coli cells (1 l) was grown at 37°C until OD600 reached 0.5; the temperature was then reduced to 22°C, and protein synthesis was induced with isopropyl-β-D-thiogalactopyranoside (IPTG; 1.0 mM final concentration). After centrifugation, E. coli cells were resuspended in cold PBS and lysed by ultrasonic treatment.
Purification of rTgAPN2 was performed using nickel-nitrilotriacetic acid (Ni-NTA) agarose (Qiagen, Valencia, CA, U.S.A.) according to the manufacturer's protocol. Briefly, cleared cell lysates were prepared in a buffer containing 50 mM NaH2PO4, 300 mM NaCl, 20 mM imidazole, 1 mg/ml lysozyme, 1 mM Na3VO4, 40 mM NaF, 100 µM leupeptin, 1 mM 4-(2-aminoethyl)-benzenesulfonyl fluoride, and 0.05 TIU/ml aprotinin (pH 8.0); they were combined with 1 ml of Ni-NTA agarose, rotated for 1 hr at 4°C, and then transferred onto a chromatography column. The rTgAPN2 Ni-NTA agarose complexes were washed three times with 50 mM imidazole wash buffer (50 mM NaH2PO4, 300 mM NaCl, and 50 mM imidazole; pH 8.0) and eluted with 250 mM imidazole elution buffer (50 mM NaH2PO4, 300 mM NaCl, and 250 mM imidazole; pH 8.0). The His-glutathione S-transferase (His-GST) Ni-NTA agarose complex was purified using the same methods. His-GST and rTgAPN2 elution fractions were analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) to determine the appropriate fractions for pooling. For SDS-PAGE, the samples were mixed with an equal volume of sample buffer containing 10% beta-mercaptoethanol (5% final concentration) and then boiled for 10 min. Proteins (20 µg/lane) were run on 10% polyacrylamide slab gels; after electrophoresis, the gels were stained in Coomassie stain solution for 1 hr, destained to remove excess Coomassie dye, and imaged. Proteins were dialyzed in a buffer containing 50 mM HEPES, 150 mM NaCl, and 10% glycerol overnight at 4°C, then aliquoted and used in biochemical assays. The concentration of purified rTgAPN2 was determined using the BCA Protein Assay Kit (Thermo Fisher Scientific, Schwerte, Germany).
Enzyme activity assay
Following purification, protein activity was evaluated. The enzymatic activity of rTgAPN2 was determined with H-Ala-(4-methyl-coumaryl-7-amide) (H-Ala-MCA) as a substrate, using an EnSpire Multimode Plate Reader (PerkinElmer) at 355/460 nm (excitation/emission). Different concentrations of the purified protein were added to 50 mM Tris-HCl buffer (pH 8.0) and pre-heated at 37°C for 20 min, and then the substrate (0.1 mM final concentration) was added. As the substrate was cleaved, the fluorescence increased; once all of the substrate had been hydrolyzed, the fluorescence reached its maximum and the curve plateaued. Reactions containing only GST were used as a negative control.
The effect of pH on enzyme activity
The pH of 50 mM Tris-HCl buffer was adjusted to different values. The specified amount of purified protein was added to the different pH buffers and pre-heated at 37°C for 20 min, and then the substrate H-Ala-MCA (0.1 mM final concentration) was added. The initial reaction velocities at the different pH values were then measured and calculated.
The effect of metal ions on enzyme activity
Cation sensitivity was investigated by assaying the activity of rTgAPN2 after pre-incubation at 37°C for 30 min in 50 mM Tris-HCl (pH 8.0) supplemented with a metal chloride (Co2+, Mn2+, Fe2+, Ni2+, Mg2+, Ca2+, Cu2+, or Zn2+; Sigma-Aldrich); the substrate was H-Ala-MCA (0.1 mM final concentration). The initial reaction velocities in the presence of the different metal ions were then measured and calculated. To test the effect of metal ion chelation by EDTA on enzyme activity, the enzyme was pre-incubated with different concentrations of EDTA (0.5, 1, or 10 mM) for 30 min at 37°C before the substrate (H-Ala-MCA) was added.
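The initial reaction velocity used throughout these assays is simply the slope of the early, linear portion of the fluorescence time course. A minimal sketch of that calculation, with a simulated progress curve rather than the study's readings:

```python
# Sketch of an initial-velocity calculation: fit a straight line to the
# first few points of the fluorescence progress curve. The time course
# below is simulated, not measured data from this study.
import numpy as np

def initial_velocity(t, fluo, n_linear=4):
    """Slope (RFU per time unit) over the first n_linear readings."""
    slope, _intercept = np.polyfit(t[:n_linear], fluo[:n_linear], 1)
    return slope

t = np.arange(0.0, 10.0, 1.0)             # minutes
fluo = 100.0 * (1.0 - np.exp(-0.15 * t))  # saturating progress curve
print(f"v0 ~ {initial_velocity(t, fluo):.1f} RFU/min")
```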
The rTgAPN2 substrate fingerprint and determination of kinetic parameters
The substrate preference and the Km, kcat, and kcat/Km values (Km, the Michaelis constant; kcat, the catalytic constant; kcat/Km, the second-order rate constant) were determined as previously described [21]. Briefly, rTgAPN2 substrate specificity was assayed using a library of MCA substrates. rTgAPN2 activity was assayed in 50 mM Tris-HCl buffer (pH 8.0) at 37°C. The enzyme and substrate concentrations were 0.1 mM. Peptidase activity was monitored at 355/460 nm, and each kinetic assay was repeated three times. The average values and standard deviations (SD) were calculated.
Inhibition assay
To determine the inhibitory effect of bestatin and amastatin on peptidase activity, rTgAPN2 was pre-incubated with bestatin or amastatin for 30 min at 37°C before the addition of the fluorogenic substrate H-Ala-MCA. Relative enzyme inhibition was assessed with different concentrations of bestatin or amastatin.
Identification and sequence analysis of the TgAPN2 gene
The full-length TgAPN2 cDNA contains an open reading frame of 3,659 bp; the gene resides on chromosome X. An "HEXXH" motif (residues 529-922) and a DUF3485 sequence motif (residues 997-1217) were identified in the deduced protein sequence using the Pfam algorithm in the SMART program (http://smart.embl-heidelberg.de/) and the SWISS-MODEL program, respectively (Fig. 1A). In these enzymes, the histidine residues of the HEXXH motif coordinate the catalytic zinc ion, and the glutamic acid residue is proposed to be one of the putative nucleophile-activating residues participating in the enzymatic reaction (Fig. 1B and 1C). Multiple-sequence alignment of the active sites of the M1 protease family members revealed that TgAPN2 contains the functional domains of the M1 protease family, "GAMEN" and HEXXH (Fig. 1D); the generated recombinant protein contained these two domains as well (Fig. S1). These results indicated that the amino acid sequence of TgAPN2 was similar to homologous APN proteins, such as APN from N. caninum (44.9% identity) or Sarcocystis neurona (34.5% identity); however, the homology between TgAPN2 and PfA-M1 was relatively low, only 20% (Fig. 1E).
Expression of rTgAPN2
The rTgAPN2 protein containing an N-terminal His-tag was expressed as expected. The molecular mass of rTgAPN2 was 85 kDa, as assessed by SDS-PAGE. His-GST protein was used as a control (Fig. 2A).
The activity of rTgAPN2
Previous studies demonstrated that APNs have a strict preference for Ala as the N-terminal amino acid residue. The activity of purified rTgAPN2 was therefore tested using a specific fluorescent substrate, H-Ala-MCA. The purified protein exhibited appreciable catalytic activity in 50 mM Tris-HCl (pH 8.0); GST protein served as the negative control (Fig. 2B). Under the same conditions, the reaction catalyzed by rTgAPN2 exhibited Michaelis-Menten kinetics with the substrates H-Ala-MCA and H-Arg-MCA (Fig. 2C and 2D).
The metal ion dependence of enzyme activity
The effect of divalent metal ions on enzyme activity was next evaluated. H-Ala-MCA was used as a standard substrate to examine the protein's dependence on metal ions; the enzyme activity was inhibited by the addition of the bivalent metal cations Co2+, Mn2+, Fe2+, Ni2+, Mg2+, Ca2+, Cu2+ and Zn2+ (Fig. 3A).
To test whether a higher concentration of metal ions would further inhibit the activity of rTgAPN2, different concentrations of Zn2+ ions were used in the reactions. The experiment revealed that the activity of the enzyme somewhat recovered with decreasing concentrations of metal ions; however, a low Zn2+ concentration still significantly (P<0.01) inhibited the activity of the enzyme (Fig. 3B and 3C). EDTA competitively chelates divalent metal ions and inhibits the activity of some metal ion-dependent enzymes, e.g., human APN (CD13) [14] and EtAPN [7]. We speculated that rTgAPN2 might have been activated by metal ions during production in the heterologous (prokaryotic) system. Therefore, the ability of EDTA to chelate the metal ions and inhibit rTgAPN2 activity was next examined. Different concentrations of EDTA were added to the reaction mixture; 1 mM EDTA significantly inhibited the activity of the recombinant protein compared with the untreated group (P<0.05) (Fig. 3D).
The pH dependence of rTgAPN2 activity
Next, the catalytic activity of the recombinant protein was evaluated at different pH values. The activity of rTgAPN2 was tested in 50 mM Tris-HCl buffers with the pH adjusted from 1 to 11; the optimum enzyme activity was observed at pH 8.0. The enzyme also retained some activity at pH 3.0 (Fig. 4A). This indicated that rTgAPN2 is active over a wide range of pH values.
Substrate specificity and enzyme kinetics of rTgAPN2
To characterize the enzymatic properties of rTgAPN2, its activity was measured in fluorescence assays with various peptidase substrates (Fig. 4B). The initial reaction rate of rTgAPN2 with H-Ala-MCA was first calculated. Then, an rTgAPN2 fingerprint of MCA substrate specificity, in comparison with H-Ala-MCA, was obtained using a library of synthetic substrates. The preferred amino acids were Ala and Arg. rTgAPN2 also accommodated Gly, Leu, Phe, and Pro, while the H-Lys-MCA and H-Trp-MCA substrates were cleaved at lower rates (Figs. 4B and S2). The kinetic parameters of rTgAPN2 were determined using a panel of selected substrates (Table 1).
The inhibitors of rTgAPN2
Inhibition assays were then performed with the purified rTgAPN2 protein. The activity of rTgAPN2 was reduced in a dose-dependent manner in the presence of bestatin or amastatin. When the same concentrations of inhibitors were tested, the inhibitory effect of amastatin appeared to be slightly stronger than that of bestatin (Fig. 4C and 4D).
DISCUSSION
APN is a member of the M1 family of metalloproteases, which hydrolyzes N-terminal neutral (e.g., leucine and alanine) or basic (e.g., arginine and lysine) amino acid residues in oligopeptides or polypeptides. Such substrate diversity indicates a wide range of biological functions for APNs.
In this study, a novel T. gondii aminopeptidase belonging to the M1 metalloproteinase family was produced and purified in E. coli, and its biochemical characteristics were evaluated in a series of experiments. Although it was difficult to produce the full-length protein in E. coli, the enzymatically active region of TgAPN2 was analyzed and successfully produced as a recombinant protein; the activity of the recombinant protein was then verified.
Based on protein sequence analysis, TgAPN2 contains a highly conserved ion-binding motif and a catalytic motif [1]. These two motifs are characteristic of all M1 family aminopeptidases. Most APNs are thought to be zinc-dependent metalloproteinases [2,25].
We previously produced a family of metalloproteinases in E. coli, demonstrating that these proteases were metal-dependent [11,29]; i.e., recombinant aspartyl aminopeptidase (rTgAAP) and leucyl aminopeptidase (rTgLAP) of T. gondii had some activity in the absence of added metal ions, and the addition of metal ions improved their activity. As shown in the current study, TgAPN2 is somewhat different from rTgAAP and rTgLAP. Purified rTgAPN2, produced in E. coli, did not require the addition of metal ions for activity, and the addition of a low concentration of zinc ions inhibited its activity. Some Zn2+ metalloproteases acquire zinc ions during production in prokaryotic systems, and a subsequent addition of zinc ions significantly inhibits their activity [13]. We hypothesize that rTgAPN2 may also bind Zn2+ ions during prokaryotic expression and purification. If that were the case, the enzymatic activity would be decreased after the addition of a metal chelator, EDTA. Indeed, EDTA inhibited the activity of rTgAPN2. Thus, TgAPN2 is likely to be a Zn2+ metalloprotease of the M1 family. Protein expression in eukaryotic systems, e.g., yeast, might yield different results; we are planning to explore this activity in follow-up experiments.
PfA-M1 is essential for protein metabolism and plays an important role in the digestion of host hemoglobin into free amino acids [16]. PfA-M1 exists in three soluble forms, p120, p96 and p68, localized in different parts of the Plasmodium cell. Protein p68 is marginally delivered into the food vacuole at trophozoite stages, but is not in direct contact with the parasite cytoplasm [4]. In fact, the intravacuolar pH of the food vacuole can decline from 7 to 3 [8], and studies show that the optimal pH for Plasmodium falciparum APN activity is 7.4. We found that rTgAPN2 was active in the 3-10 pH range, suggesting that M1 family aminopeptidases share some common features.
PfA-M1 is regarded as an exciting target for novel anti-malarial drugs [24]. Unlike P. falciparum, T. gondii has three APN-encoding genes in its genome. These three genes, which belong to the M1 family aminopeptidases, are located on different chromosomes of T. gondii and are named APN1, APN2 and APN3 according to the previous report [10]. Thus, they may or may not have a common function. Therefore, to develop anti-Toxoplasma drugs, it is important to understand the functions of each TgAPN. The current study demonstrated that the short-segmented rTgAPN2 protein retained appreciable enzymatic activity, with a broad substrate specificity for natural amino acids. Ala is considered to be the most suitable substrate for the M1 family aminopeptidases in P. falciparum [6]. Here, similar results were obtained with the T. gondii enzyme, with rTgAPN2 being highly active toward H-Ala-MCA; however, the activity of rTgAPN2 was even higher with H-Arg-MCA. By contrast, the activity of the E. tenella aminopeptidase with H-Ala-MCA and H-Arg-MCA is substantially the same [7]. This suggests that the substrate preference of TgAPN2 may have its own characteristics.
The involvement of aminopeptidase activities during Plasmodium species development was initially illustrated using bestatin [17], a well-known broad-spectrum inhibitor of aminopeptidases.
E. tenella aminopeptidase N1 (EtAPN1) is likely the most significant target of bestatin in E. tenella sporozoites, but for rEtAPN1 the inhibitory effect of amastatin was higher than that of bestatin [10]. This phenomenon is also reflected in our experimental results: the inhibitory effect of bestatin on rTgAPN2 was dose-dependent, bestatin did not completely inhibit rTgAPN2, and the inhibitory effect of amastatin was to some extent greater than that of bestatin. For human aminopeptidase N (APN/CD13), an IC50 of 9.0 mM was determined for amastatin, which is lower than that of bestatin (IC50 = 16.9 mM) [22,26].
We propose that TgAPN2 is also involved in parasite development and plays a role similar to that of PfA-M1. It is noteworthy, however, that Plasmodium and Cryptosporidium, in contrast to Toxoplasma and Eimeria, have only one aminopeptidase enzyme of the M1 family. Complementary phylogenetic analysis should be carried out to determine whether gene duplication occurred in these parasites. Given the above, understanding the biochemical characteristics of TgAPN2 should allow anti-Toxoplasma drug targets to be designed in a more targeted way.
Fig. 2. The expression of rTgAPN2 and verification of its activity. (A) SDS-PAGE analysis of the purified rTgAPN2. The molecular mass of rTgAPN2 with a His-tag was 85 kDa. GST was used as a control. (B) The catalytic activity of rTgAPN2. The activity of purified rTgAPN2 was evaluated using a specific fluorescent substrate, H-Ala-MCA. GST was used as a negative control. The plateau stage indicates a complete release of the substrate. (C) & (D) Michaelis-Menten enzyme kinetics. Enzymatic activity of purified rTgAPN2. An initial assay with different concentrations of the fluorogenic peptide substrates H-Ala-MCA and H-Arg-MCA is shown. One unit of activity is defined as the amount of MCA (pM) released per mg of recombinant protein. Data points indicate the mean activity ± SD (n=3).
Fig. 3. Restoration of rTgAPN2 activity by divalent metal ions. rTgAPN2 was pre-incubated at 37°C for 30 min in 50 mM Tris-HCl supplemented with the specified metal chloride, and then the substrate (H-Ala-MCA, 0.1 mM final concentration) was added. (A) Enzyme activity in the presence of 1 µM metal ions. (B) Enzyme activity in the presence of 0.1 µM metal ions. (C) Different concentrations of Zn2+ ions. (D) Enzyme activity in the presence of EDTA. The differences between samples were evaluated by Student's t-test (n=3). *P<0.05, **P<0.01.
Fig. 4. (A) pH dependence of rTgAPN2. Optimum enzyme activity was observed at pH 8.0. (B) Substrate specificity of rTgAPN2. One unit of activity is defined as the amount of MCA (pM) released per mg of recombinant protein. Relative activity is expressed as a percentage of activity with H-Ala-MCA. (C) The activity of rTgAPN2 is inhibited by bestatin and amastatin. rTgAPN2 was incubated with different concentrations of either bestatin or amastatin for 30 min before the addition of H-Ala-MCA. The residual enzyme activity was recorded and expressed as a percentage of the activity of enzyme incubated in the absence of inhibitors. The inhibitory effect of bestatin and amastatin on the aminopeptidase activity was dose-dependent.
Table 1. Kinetic parameters for the hydrolysis of peptide substrates by rTgAPN2
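For context, kinetic parameters such as those reported in Table 1 are typically obtained by nonlinear least-squares fitting of the Michaelis-Menten equation, v = Vmax[S]/(Km + [S]), to measured initial velocities. The sketch below uses simulated data and a hypothetical enzyme concentration; it is not a reproduction of the study's analysis.

```python
# Sketch of Michaelis-Menten parameter estimation by nonlinear least
# squares. Substrate concentrations, velocities, and the enzyme
# concentration used for kcat are all hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([5.0, 10, 25, 50, 100, 200, 400])  # [S], uM
rng = np.random.default_rng(0)
v = michaelis_menten(s, 120.0, 40.0) + rng.normal(0, 2, s.size)

(vmax, km), _cov = curve_fit(michaelis_menten, s, v, p0=(100.0, 50.0))
kcat = vmax / 0.5  # assuming a hypothetical 0.5 uM enzyme concentration
print(f"Vmax = {vmax:.1f}, Km = {km:.1f} uM, kcat/Km = {kcat / km:.2f}")
```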
Eco-geographic study of Mahaleb (Prunus mahaleb L.) in the middle and northern parts of the eastern Mediterranean
Background: Mahaleb still exists in most of the eastern Mediterranean forests associated with Cilician fir (Abies cilicica) and Lebanon cedar (Cedrus libani). However, conservation of its germplasm in gene banks is important owing to its degradation in natural habitats, and there is growing interest in expanding Mahaleb cultivation due to its low requirements and endurance of harsh environments.
Methods: The study used the approaches and concepts of autecology to study Mahaleb in situ. Field surveys were conducted on investigated homogeneous areas of about 100 m² to 400 m² (relevé areas).
Results: Mahaleb occurs in its habitat as isolated individuals and in fragile population structures, where populations were largely believed to have been clumped or linear. The spatial distribution is restricted to small isolated zones in half-open, treeless, or rocky outcrop areas of deciduous forests or rugged areas of barren mountains. Root sprouting seems to be the dominant mode of recruitment. However, all sites showed missing age classes, which may indicate human infringement or the failure of recruitment in some years. The spatial distribution showed that Mahaleb exists under different environmental and climatic conditions of soil, landscape, rainfall, and temperature. This can be attributed to its possession of genetic capabilities that enable it to adapt to varying environmental conditions, in addition to the presence of different genotypes or higher taxa such as subspecies, or it may even reflect differences in environmental resilience within some species themselves.
Conclusions: This reflects Mahaleb's high ability to withstand environmental, thermal, and water stresses. Notably, strong, long roots were found at different depths of soil, some within the joints of rocks, which strengthens its role in soil conservation. The geo-distribution of Mahaleb suggests different genotypes or higher taxa such as subspecies, or even differences in environmental resilience within some species themselves. It is also necessary to predict new potential areas for growing Mahaleb in the eastern Mediterranean to increase production, either by introducing its cultivation in unconventional areas or by enhancing its productivity in the areas currently cultivated, which appears to be an important issue soon.
Keywords: Prunus mahaleb, drought, genetic erosion, Mediterranean.
* Correspondence: hussamhmhusien@gmail.com
Background
The East Mediterranean region and Western Asia countries are considered the origin habitat of Mahaleb (Prunus mahaleb L., Cerasus mahaleb (L.) Mill.; mahleb cherry or St. Lucie cherry (En.), mahlab (Ar.)) (Zohary 1962; Ruiz 1989; Rallo 1995; Scholz and Scholz 1995; Blanca and Diaz 1999; Katzer 2002). Besides, it is adjudged to be native in northwestern Europe, or at least it is naturalized there (National Research Council 1991). The occurrence of small and spatially isolated populations in Switzerland forms the northern range edge that Mahaleb can reach (Kollmann and Pflugshaupt 2005).
The isolation of these old rare species is often the result of environmental change (Huenneke 1991), such as the climate cooling that followed the warmest time of the post-glacial period (Burga and Perret 1998; Kollmann and Pflugshaupt 2005). Many studies consider Mahaleb to be one of the ancestors of the cherry; in many countries, it is used as a seedling rootstock for grafting sweet cherry (P. avium L.) and sour cherry (P. cerasus L.). It is considered a strong rootstock due to its tolerance of drought and of high calcium carbonate content in the soil (Nabulsi 2004); it is also found in most well-drained soils (Guitian 1993) and poor soils on open rocky slopes, as well as in sunny or partially sunny places (Bean 1981; Huxley 1992). The differences that Socias (1996) reported between its wild forms included the weight of the fruit, the length of the leaf neck (petiole), and the leaf area index.
The wild form exists in most of the eastern Mediterranean forests associated with Cilician fir (Abies cilicica) and Lebanon cedar (Cedrus libani), at 2000 m elevation (Mouterde 1970; Barkuda and Audat 1983; Barkuda et al. 2002). Currently, there is growing interest in expanding Mahaleb cultivation in promising agricultural areas due to its low agricultural requirements and endurance of harsh environments. For instance, in Syria there are 5737 hectares of cultivated land containing around 1.3 million trees, most of which are not yet in the fruiting stage. However, kernel production of around 25 tons annually ranks Syria in an advanced position (The annual agricultural statistical abstract 2016). Farmers have found an economic benefit from cultivating it because of the demand for its nutritional and medical value and for its use in many industries. Its kernel oil contains a high level of polyunsaturated fatty acids, especially α-eleostearic acid, a conjugated fatty acid rarely found in vegetable oils (Sbihi et al. 2014), and in the future it may be important for clinical nutrition and the food and pharmaceutical industries (Özçelik et al. 2012). Phenotypic differences in Mahaleb were studied in Turkey and Italy as valuable genetic material for seed breeding programs (Gass 1996); several drought- and carbonate-tolerant clones were selected for arid calcareous soils (Baumann 1977; Giorgio et al. 1992; Giorgio and Standardi 1993). The eastern Mediterranean is characterized by historical degradation as a result of habitat damage due to frequent fires, wood extraction, and overgrazing. In particular, Mahaleb is suffering from tremendous depletion of genetic resources in its origin habitats (Nabulsi 2004). However, Tawaklna et al. (2011) found and studied 22 phenotypes of wild Mahaleb in Syria.
This study aims to determine the autecology of Mahaleb and to study its landscape ecology in different ecosystems in the middle and northern parts of the eastern facade of the Mediterranean.
Study area
A comprehensive field survey and a spatial investigation were carried out at the locations where wild or cultivated Mahaleb exists, both in the natural forests and in some mountain areas where remnants of the perennial wild trees still grow, in order to study Mahaleb in its natural habitats, as well as at some sites that are planted and irrigated by local farmers.
Information regarding the locations was retrieved from a variety of sources: available documents and literature, official statistics from the National Statistical Agencies, and the local inhabitants (Fig. 1).
Survey methodology was adopted according to Maxted (1997), where the location of Mahaleb in each study area was initially investigated. Homogeneous areas of about 100 m² to 400 m² were selected as relevé areas, and the following parameters were recorded and studied (Chalabi 1980; Sankary 1988), including the species cover-abundance, defined as follows (Braun-Blanquet 1928, 1964; Whittaker 1973; Mueller-Dombois and Ellenberg 1974; Chalabi 1980; Nader 1985):
5: the species covers more than 3/4 of the relevé area (more than 75%).
4: covers from 1/2 to 3/4 of the relevé area.
3: covers from 1/4 to 1/2 of the relevé area.
2: covers from 1/20 to 1/4 of the relevé area.
1: covers less than 1/20 of the relevé area.
Soil samples were taken from the topsoil (depth of 0 to 30 cm) for laboratory analysis, where they were air-dried, crushed, and passed through a 2-mm sieve; the following physical and chemical analyses were then conducted on the < 2-mm fraction:
The particle-size analysis was performed by the hydrometer method with the application of sodium hexametaphosphate (Na6P6O18) as a chemical dispersion agent (Soil Survey Division Staff 1993).
The Walkley-Black method (1934), modified by Nelson and Sommers (1982), was used to determine the soil organic matter.
Electrical conductivity (EC) was measured in an H2O suspension (1:2) (Soil Conservation Service 1992).
Soil reaction (pH) was measured in an H2O suspension (1:1) (Soil Conservation Service 1992).
Total nitrogen was estimated by (Kjeldahl 1883) and McRae (1988).
Total potassium was estimated by (Jackson 1956).
Available phosphorus was estimated by (Olsen 1954).
Calcium carbonate content was determined by (Balázs et al. 2005).
The climatic parameters studied included the quarterly precipitation pattern, seasonal trend (K), and the precipitation coefficient of variation (C.V).
The pluviothermic quotient of Emberger (1955) and Daget (1977) was used to determine the bioclimate and variant of each study site:
Q2 = 2000P / (M² − m²)
where Q2 is the pluviothermic quotient, P is the average annual precipitation in mm, M is the mean of the maximal temperatures of the hottest month in K (degrees absolute), and m is the mean of the minimal temperatures of the coldest month in K (degrees absolute).
According to the Q2 values, five categories of humidity could be distinguished (Table 1).
As the role of the minimum temperature in species distribution has been pointed out by Larcher (1983) and Woodward (1987), winter variants have been suggested (Quézel et al. 1985; Daget et al. 1988; Barbero et al. 1992) according to the values of m (Table 2). Thermal averages were estimated following Sankary (1988) (Table 3). The aridity index was estimated using the calculation of the degree of continentality (Gorczynski 1922; Abbas 1990). To develop a digital map of Mahaleb distribution, a digital database was established using GIS (Hijmans et al. 2005). The data were analyzed statistically using the Statistical Analyses System (SAS).
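As a worked illustration of the quotient defined above, the sketch below computes Q2 from annual precipitation and the extreme monthly means; the example inputs are illustrative, not values from the study sites.

```python
# Sketch of Emberger's pluviothermic quotient, Q2 = 2000 P / (M^2 - m^2),
# with M and m expressed in kelvin (degrees absolute). Example inputs
# are illustrative only.
def emberger_q2(p_mm: float, max_hot_c: float, min_cold_c: float) -> float:
    M = max_hot_c + 273.15   # mean maximum of the hottest month, K
    m = min_cold_c + 273.15  # mean minimum of the coldest month, K
    return 2000.0 * p_mm / (M ** 2 - m ** 2)

# e.g. 600 mm annual rainfall, hottest-month mean maximum of 32 C, and
# coldest-month mean minimum of 2 C give Q2 of roughly 69.
print(f"Q2 = {emberger_q2(600, 32, 2):.1f}")
```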
Results
The results of the eco-geographic survey confirmed that Mahaleb is present where the eu-Mediterranean climate prevails. Its occurrence was monitored in the middle and northern parts of the eastern facade of the Mediterranean: it was observed under wild conditions in six locations and under cultivated conditions in four sites. However, it has disappeared completely from some locations where it was strongly believed to exist. The topographical features of the sites of Mahaleb diffusion varied, as they appeared on steep slopes, between rocks, and in flat agricultural plains, associated with a variety of plants. The following is a brief description of the studied sites.
Physical and chemical properties of the soil of the studied sites:
Soil textures varied from sandy loam (Lsr7) and sandy clay loam (Lse1) to clay loam in both (SD) and (Hj), and clay in the rest of the locations (Table 3).
Table 3 Soil particle size distribution.
The chemical and fertility properties of the soils in which Mahaleb occurs varied widely (Table 4).
Table 4 Chemical analysis of study site soils.
Results of the climatic study of the studied sites:
The results showed that the average prevailing temperature at the sites ranged between 7.78 and 34.8 °C, while the (WTI) ranged from glacial to cold and the (WI) from glacial winter to cold winter (Table 5).
Table 5 WTI and WI for the study sites.
The annual rainfall ranged from 257.7 to 1425.1 mm, with a (C.V) between 0.223 and 0.37 and a standard error of 126.6. The seasonal pattern of precipitation is winter-spring-autumn-summer (Table 6).
Table 6 The character of rainfall of the study sites.
According to the pluviothermic quotient of Emberger, Mahaleb occurs in bioclimatic stages from humid cold to semi-arid fresh, with frequent to occasional frost frequency (F.F) (Fig. 2).
Fig. 2. The distribution of study sites on Emberger's climagram.
The drought indicators showed that the thermal average for Mahaleb ranges between 12 and 19. The degree of continentality, which can serve as an aridity index, ranges between 24 and 38.08, i.e., from continental to semi-continental to coastal, with a range exceeding 14. Spatial continentality values ranged from 49.05 at (REs) to 14.49 at (Lse1) (Table 7).
Table 7 The mean thermal and continentality values for each studied site.
Discussion
The wide geo-environmental range of Mahaleb suggests genotypes that reflect various environmental conditions (Vivero et al. 2001). Although this occurrence is not widespread, it supports previous studies that confirmed that some countries of western Asia, such as Syria, Turkey, Iran, Iraq, and Lebanon, are also the original homeland of Mahaleb, where it can be found in wild conditions in forest and mountainous areas (Mouterde 1970; Chalabi 1980; Nahal et al. 1989; Ghazal 1994; Ghazal Asswad 1998; Chikhali 2000). No individuals of Mahaleb were observed in the Orontes plain or in Jisr al-Shughur, contrary to what some previous studies indicated (Mouterde 1970; Barkoudeh and Audat 1983). Besides, its presence was very rare in the Anti-Lebanon mountain range and the Qalamoun Mountains, where cherry cultivation abounds; this may be due to the use of wild trees as rootstocks for grafting cherries. This illustrates the extent of the genetic erosion to which Mahaleb has been exposed by human activities such as changing the agricultural system in its natural habitats, as well as overgrazing and the logging of old trees and seedlings alike.
The soil data indicate that Mahaleb grows in soils of various textures, in line with the findings of Bean (1981) and Huxley (1992). The soil reaction (pH) differed among sites, with a range of 1.64.
This indicates the resilience of Mahaleb towards soil pH, as it was found in soils of different pH, ranging from slightly acid (SD) to moderately alkaline (Im); this corresponds to the indication by Bean (1981) and Huxley (1992) that it favors slightly acid soils and suffers from chlorosis in soils of higher pH. Soil salinity was low in most locations, while the most prominent variation was in calcium carbonate, which directly affects the mobility of trace elements in the soil, especially iron. Mahaleb has shown some resilience indicators, as it tolerates high levels of calcium in the soil, whether in the seedling phase or as whole trees; the calcium carbonate ratios in the soils of the studied sites ranged between 4.38% at site (SD) and 42.08% at site (Ak3). The organic matter content of the studied site soils varied: the highest values were at sites (Lse1), (REw), and (Lsr7), with 4.79%, 4.52%, and 4.42%, respectively, while the lowest was at site (Ak3), at 0.57%. This indicates that Mahaleb has low nutritional requirements and grows in both fertile and poor soils. The site soils were characterized by a significant total nitrogen content, with a maximum of 1.01% at (Lsr7). The content of available phosphorus was rated as good; (SD) was the highest, with 55.1 mg.kg-1. There was also a good content of available potash, with a maximum of 428.2 mg.kg-1 at (Rg); however, site (Ak3) was poor in available potash (126.6 mg.kg-1). Notably, strong, long root systems were observed growing in soils of different depths, even within the joints of rocks, and this strengthens Mahaleb's role in protecting the soil from water erosion.
According to the pluviothermic quotient of Emberger, it can be concluded that Mahaleb is a plant that occurs in different bioclimatic stages, with the pluviothermic quotient (Q2) exceeding 34. The drought indicators showed that the thermal average for Mahaleb ranges between 12 and 19, suggesting that the biological zero of these variates ranges between 10 and 20 °C. The degree of continentality expresses a high ability of trees and seedlings to withstand environmental, thermal, and water stresses.
The variety of soil properties and the diversity of climatic parameters where Mahaleb occurs indicate that this geo-environmental diversity may be reflected in the presence of different genotypes or higher taxa such as subspecies, or it may even reflect differences in environmental resilience within some species themselves.
The only previous study, implemented by Tawaklna et al. (2011), identified and described 22 wild phenotypes, and six of them, superior in morphological characterization, were selected. Further studies must build on that work, even though it did not rely on a sound approach to the description of Mahaleb; rather, it relied on the methodology for the description of cherries approved by the International Plant Genetic Resources Institute (IPGRI), because a descriptor for Mahaleb simply does not exist yet. Moreover, the study did not examine possible correlations between phenotypes and eco-geographic conditions.
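For reference, the continentality degree used above as an aridity indicator (Gorczynski 1922) is commonly computed as K = 1.7A / sin(φ) − 20.4, where A is the annual temperature amplitude in °C and φ the latitude; a minimal sketch, with illustrative inputs, follows.

```python
# Sketch of Gorczynski's continentality index, K = 1.7 A / sin(phi) - 20.4,
# where A is the annual temperature amplitude (C) and phi the latitude.
# The inputs are illustrative, not the study's site data.
import math

def gorczynski_k(amplitude_c: float, latitude_deg: float) -> float:
    return 1.7 * amplitude_c / math.sin(math.radians(latitude_deg)) - 20.4

# A site at 35 N with a 15 C annual range scores roughly 24, near the
# lower end of the continentality range reported for the study sites.
print(f"K = {gorczynski_k(15.0, 35.0):.1f}")
```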
Conclusions
Mahaleb has been present in the Mediterranean since ancient times, and its seeds are still used in nutrition and industry. It is used as a rootstock for cherry trees due to the strength of its roots and its tolerance of drought and of high carbonate content in the soil. It can be found in both wild and cultivated conditions.
Mahaleb has developed over time an acclimatization to the surrounding environmental factors, such as terrestrial and climatic environmental stresses. Its smooth, shiny leaves provide a way to reflect sunlight, thereby avoiding its direct thermal effect on the one hand and reducing evapotranspiration intensity on the other.
The environmental resilience of Mahaleb has created important roles that it can play in the eastern Mediterranean forest: in progressive succession as a medium-sized tree within the climax community, and as a shrub in regressive succession within the deteriorating forest climax community.
On cultivated land, there has been increasing interest in the cultivation of Mahaleb in recent decades as a promising tree for hilly and mountainous areas, capable of withstanding harsh environments, in addition to its good economic returns, low requirements, and resistance to diseases; its cultivation has therefore spread steadily. Also, its wide environmental range indicates the presence of many phenotypes suited to promising agricultural areas. To achieve this, detailed studies should be conducted to determine the critical (biotic and abiotic) stress boundaries of Mahaleb trees in their natural habitats, in order to select acclimatized clones for each environmental region.
It is also necessary to predict new potential areas for growing Mahaleb in the eastern Mediterranean to increase production, either by introducing its cultivation in unconventional areas or by enhancing its productivity in the areas currently cultivated, which appears to be an important issue soon.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Availability of data and material
All data generated or analysed during this study are included in this published article.
Competing interests
The authors declare that they have no competing interests.
2020-12-10T09:05:41.672Z
2020-12-03T00:00:00.000
{ "year": 2020, "sha1": "cf9955769343aad330a24b7a29d57fb8359d3ad4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21203/rs.3.rs-117177/v1", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "2e13907cc0d6e25a2e284adb6fea132cdd3c1550", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
54591816
pes2o/s2orc
v3-fos-license
Motivational Needs of Family and Consumer Sciences Education Students The purpose of this study was to examine the motivational needs of secondary students enrolled in family and consumer sciences (FCS). The study was based on McClelland's motivational needs theory. Results indicated that FCS students were motivated by the need for achievement more than the need for affiliation and by the need for affiliation more than the need for power. FCS students who became members of Family, Career, and Community Leaders of America (FCCLA) had a higher need for affiliation and power than those who were not members. Introduction Theories of motivation have been a focus of study throughout the 1900s. Previous studies of student motivation have been concerned with the factors that educators believe motivated students rather than with the motivational needs as perceived by students (Crump, 1995; Dembrowsky, 1990; Horne, 1991). In contrast, Turner and Herren (1997) addressed the motivational needs of students enrolled in secondary agricultural education classes. They discovered that students enrolled in secondary agricultural education classes were motivated by the need for achievement. Turner and Herren further determined that students enrolled in secondary agricultural education classes who were members of FFA had a greater need for achievement, affiliation, and power than those students who were not FFA members. Like the Turner and Herren study, this study focused on the motivational needs of career and technical education students who were enrolled in family and consumer sciences (FCS) classes. Family and consumer sciences (until 1993 known as home economics) is a subject area that focuses primarily on the family. Family and Consumer Sciences Education (FCSE) empowers individuals and families to manage the challenges of living and working in a diverse society. The unique focus of FCSE is on the functioning of families and their interrelationships with work, community, and society (Redick, 1998). The recurring, practical problems of individuals and families, which are usually interrelated and interdependent, are addressed in FCS. In FCS, an integrative approach is used to present the curriculum. This approach helps individuals and families identify, create, and evaluate goals and alternative solutions to significant problems of everyday life and take responsibility for the consequences of their actions (Redick, 1998). Family and consumer sciences programs are offered in most high schools in America. An integral part of the FCS program is the youth organization known as Family, Career, and Community Leaders of America (FCCLA), formerly Future Homemakers of America/Home Economics Related Occupations (FHA/HERO); the name change was approved during the 1999 national meeting in Boston. Therefore, the current name of the youth organization will be used to report the findings of this study. The mission of FCCLA is to promote personal growth and leadership through FCS. FCCLA focuses primarily on the multiple roles of men and women as family members, wage earners, and community leaders. Activities of this organization help members develop skills in character building, creative and critical thinking, interpersonal communication, practical knowledge, and vocational preparation (Vaughn, Vaughn, & Vaughn, 1987).
Enrollment in vocational youth organizations generally (Hannah, 1993) and FCCLA specifically (Nicholson, 1994) has declined or remained stagnant for at least two decades. Lack of participation in FCCLA could be detrimental to FCS programs. According to Anderson and Wooldridge (1995), FCCLA chapters were a means of giving growth experiences to students as well as improving FCS education programs; the experiences that students encounter through FCS enhance classroom learning (FHA/HERO Chapter Handbook, 1991). Yet, enrollment has declined in some FCS programs, and in order to participate in FCCLA, a student must be currently or previously enrolled in a FCS class. FCCLA and FCS are intricately tied together. Therefore, it was proposed that the needs of students enrolled in FCS education programs be identified and then utilized to increase membership in both FCS and FCCLA. Theoretical Framework Motivation research in education has centered on goal theories (Brophy, 1983; Ford, 1992; Locke & Latham, 1990) and has a long-standing history in studies of psychology. However, goal theories do not really address the issue of what energizes or moves behavior. On the other hand, needs theories are based on the idea that people have different needs, and searching to satisfy those needs is what motivates, energizes, or moves behavior. Needs provide the force for all behavior, including perception, thought, and action (Pintrich & Schunk, 1996). Therefore, a needs-based theory was chosen for this study. Various theories were examined in understanding the concept of motivation (e.g., Alderfer's ERG (existence, relatedness, and growth) theory, 1972; Herzberg's two-factor theory, 1971; Maslow's need hierarchy, 1954; and McClelland's motivational theory, 1987). These theorists tried to answer the basic question of what causes or stimulates behavior by conceptualizing needs or motives that cause people to behave in a certain way. According to some researchers (Chusmir, 1989; Wong & Csikszentmihalyi, 1991), McClelland's three factors of intrinsic motivation are applicable and relevant when studying human behavior. Therefore, the motivational theory developed by McClelland (1987) was selected as the theoretical foundation of this study. McClelland's (1955, 1984) theory described three different types of motivational needs: (a) the need for achievement (nAch), (b) the need for affiliation (nAff), and (c) the need for power (nPower). McClelland's (1987) theory is based on the belief that most people are motivated toward a certain pattern of behavior by one or a combination of the three needs. Furthermore, his theory suggested that intrinsic motivators are critical for meeting the needs of students because they describe a pattern of how a person may behave. The nAch is behavior directed toward competition with a standard of excellence. Characteristics of high achievers are (a) a strong desire to assume personal responsibility for performing a task or finding a solution to a problem, (b) a tendency to set moderately difficult goals and take calculated risks, and (c) a strong desire for performance feedback, especially in quantitative form. According to McClelland (1987), this need is shaped in part rather early in life by culture and in part by varying techniques of parenting.
The nAff is a desire to establish and maintain friendly and warm relations with other individuals. Characteristics of individuals with a high need for affiliation are (a) a strong desire for approval and reassurance from others, (b) a tendency to conform to the wishes and norms of others when pressured by people whose friendships they value, and (c) a sincere interest in the feelings of others. Persons with a high nAff are attracted to tasks involving groups (McClelland, 1984). Students with this need would tend to be the peacemakers, the team members, and the social coordinators (McClelland, 1984). These students enjoy the challenge of group work. They want to be accepted by the group, so they tend to listen, compromise, and enable a group to move forward. The final motive in McClelland's (1987) theory is the nPower. This need is explained as the need to control others, to be responsible for them, and to influence their behavior. Characteristics of individuals with a high nPower are (a) a desire to influence and direct somebody else, (b) a desire to exercise control over others, and (c) a concern for maintaining leader-follower relations. People with a high nPower tend to win arguments, persuade others, and seek power positions. McClelland suggested that there are two faces of power. The first face has a negative connotation; it is concerned with having one's way by controlling and dominating others. The other face of power is called "social" or "institutional." Social or institutional power reflects the process of leadership that uses persuasion and inspiration to help people achieve, to be happy, and to learn. This type of person is one who helps people form and attain goals while not dominating them. As McClelland's (1987) theory indicates, by identifying the motivational needs of FCS students, their behavior may be predicted. Additionally, examination of the motivational needs of students can help structure FCS programs to meet the needs of students while at the same time maintaining membership in FCCLA. Purpose The main purpose of this survey study was to determine the motivational needs of students enrolled in FCS programs. A secondary purpose was to determine and compare the motivational needs of FCS students who were members and nonmembers of FCCLA. Research questions for this study were: 1. What motivational needs do students enrolled in secondary FCS programs exhibit in relation to nAch, nAff, and nPower? 2. Do differences occur in relation to the nAch, nAff, and nPower of students enrolled in FCSE classes? 3. Are there differences between FCCLA members and nonmembers enrolled in FCSE classes in the nAch, nAff, and nPower? Sample The target population included all students in Georgia, grades 9-12, enrolled in 207 FCSE programs having a nationally affiliated FCCLA chapter, which totaled 7,988 students. Cluster sampling was chosen to identify programs for this study, as sketched below. Twelve schools were randomly chosen, with two schools selected from each of the six Georgia Department of Education districts, to ensure an adequate sample size. FCSE programs with affiliated FCCLA chapters were sorted according to district and then selected through a drawing.
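The two-stage cluster selection just described reduces to a stratified random draw. Below is a minimal sketch; the district pools and school identifiers are hypothetical placeholders, not the actual Georgia program lists.

```python
import random

# Hypothetical pools of FCSE programs with affiliated FCCLA chapters,
# grouped by the six Georgia Department of Education districts
programs_by_district = {
    f"district_{d}": [f"school_{d}_{i}" for i in range(1, 35)]
    for d in range(1, 7)
}

random.seed(42)  # makes the drawing reproducible
sample = {
    district: random.sample(schools, k=2)  # draw two schools per district
    for district, schools in programs_by_district.items()
}
for district, schools in sample.items():
    print(district, schools)
```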
Procedure Phone calls were made to the 12 program instructors selected in the random drawing to describe the study and request their participation. A cover letter requiring a principal's signature, an instruction sheet, and the appropriate number of surveys for each class were sent to the school. Instructors received a self-addressed, stamped manila envelope for returning completed surveys. Follow-up phone calls were made to all teachers to thank them for returning the surveys or to remind them to return them as soon as possible. All of the teachers from the 12 schools who were invited to participate administered and returned a total of 1,030 student surveys. Instrument The instrument used for measuring motivational needs was developed by Turner (1996) in a study of agricultural education students and FFA members. Turner modified the questions from an instrument used by Chusmir (1989). The questions were developed based on the three qualities of achievement, affiliation, and power identified by McClelland (1987). Five statements focused on each of nAch, nAff, and nPower, for a total of 15 statements. An example of a nAch statement is: I try to win as many awards as I can. An example of a nAff statement is: I try to work in a group instead of by myself. An example of a nPower statement is: I tend to organize and direct the activities of others. A 5-point Likert scale was used (1 = strongly agree, 2 = agree, 3 = undecided, 4 = disagree, 5 = strongly disagree). Although recent arguments have been made for using Likert-type scales without an undecided choice, Chang (1997) stated that there seems to be little difference in findings as long as the numerical scale is clearly defined and consistent, which was the case in this study. Based on the estimations of Litwin (1995) and Nunnaly (1978), a score of .70 or higher on Cronbach's alpha suggests good reliability. In Turner's (1996) study, the instrument had an overall Cronbach's alpha of .82. For this study, the overall instrument showed a Cronbach's alpha of .78, slightly lower than that of Turner's, but well above the recommended .70. Data Analysis Means and standard deviations for each construct were calculated to determine motivational needs. One-way analysis of variance (ANOVA) tests were calculated with the level of significance established at .05. Upon finding significance with the omnibus tests, Tukey HSD was completed to adjust for multiple comparisons of the same data.
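The reliability figure quoted above can be reproduced from raw responses with the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch follows; the response matrix is hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses from 30 students to the five nAch items
rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(30, 1))    # shared "trait" signal per student
noise = rng.integers(-1, 2, size=(30, 5))  # per-item disagreement
responses = np.clip(base + noise, 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```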
Findings For each of the three factors, the five statements were summed to create a composite score ranging from a low of 5 (strongly disagree) to a high of 25 (strongly agree), where 5.00 to 9.00 was strongly disagree, 9.01 to 13.00 was disagree, 13.01 to 17.00 was undecided, 17.01 to 21.00 was agree, and 21.01 to 25.00 was strongly agree. The highest mean was for nAch (M = 19.09, agree) and nPower had the lowest mean (M = 16.91, undecided) for students enrolled in FCS programs (see Table 1). In the omnibus test, the FCS students expressed a higher nAch than nAff, but a higher nAff than nPower, at the .05 level. The nAch represents the students' primary motivational need; however, the other needs were present. The ANOVA found that there were statistically significant differences in the students' nAch, nAff, and nPower. The nAch (Table 1) was significantly greater than the nAff, while the nAff was significantly greater than the nPower. Family and consumer sciences students were grouped by membership/nonmembership in FCCLA to determine if differences existed in the nAch, nAff, and nPower. As shown in Table 2, the results showed statistically significant differences at the .05 level for nAff and nPower based on FCCLA membership status. Family and consumer sciences students who were members had a higher nAff (M = 17.77) and a higher nPower (M = 16.92). There was no statistically significant difference between members and nonmembers on the nAch. Conclusions, Discussion, and Recommendations for Further Research First, FCS students in this study had a higher nAch and a higher nAff than nPower. Second, there were statistically significant differences in the nAch, the nAff, and the nPower for FCS students. Third, FCS students who were members of FCCLA had a higher nAff and a higher nPower than FCS students who were not members of FCCLA. Fourth, both FCCLA members and nonmembers had a nAch. From these findings, we concluded that the students in this study who were enrolled in FCSE classes were intrinsically motivated. According to McClelland's (1987) theory, most people have either one or a combination of the three needs (nAch, nAff, and nPower) which motivate them toward a pattern of behavior; this principle held true for the students enrolled in FCS classes who participated in this study. Findings showed that all three needs (nAch, nAff, and nPower) were present in FCS students. The nAch was the most dominant need for these students; however, they still had a nAff and a nPower. For this study, achievement was defined as success in relation to an internalized standard of excellence. According to McClelland and Steele (1973), if a student is motivated by the nAch, he/she may enjoy finding solutions to problems, taking moderate risks, competing, and accepting feedback on performances. The nAff was defined as close interpersonal relationships and friendships with other people. McClelland (1987) described affiliation as an activity where a group or team must rely on each other for the outcome. Finally, nPower was defined as a need to control or exercise influence and make decisions (Chusmir, 1989). McClelland described an individual with this need as a leader.
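As a side note, the interpretive bands applied to the composite scores above amount to a simple lookup. A sketch follows, assuming (as the findings imply) that items are coded so that a higher sum means stronger agreement.

```python
def interpret_composite(score: float) -> str:
    """Map a summed five-item score (range 5-25) to the interpretive
    bands used in the Findings section."""
    bands = [
        (9.00, "strongly disagree"),
        (13.00, "disagree"),
        (17.00, "undecided"),
        (21.00, "agree"),
        (25.00, "strongly agree"),
    ]
    for upper, label in bands:
        if score <= upper:
            return label
    raise ValueError("score outside the 5-25 range")

print(interpret_composite(19.09))  # nAch mean  -> 'agree'
print(interpret_composite(16.91))  # nPower mean -> 'undecided'
```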
From the findings of this study, we can infer that FCS students are intrinsically motivated. Intrinsic motivation is when a person chooses or undertakes an activity for the feeling of accomplishment the activity offers, their interest in the activity, or the enjoyment that the activity provides (Tripathi, 1992). According to Tyler (1976), the needs of learners should be used as a source for determining educational objectives. These writers encourage teachers to provide instructional programs that allow students to fulfill their nAch, nAff, and/or nPower through the use of various models of teaching. A model of teaching is a description of a learning environment. Several models of teaching are available to FCS teachers; however, only two models will be identified and recommended for use in meeting the needs of FCS students. To accommodate the motivational needs of students, FCS teachers may use the models of nondirective teaching and direct instruction. The nondirective teaching model focuses on facilitating learning. As facilitator, the teacher helps students explore new ideas about their lives, their schoolwork, and their relations with others. The model creates an environment where students and teachers are partners in learning, share ideas openly, and communicate honestly with one another (Joyce & Weil, 2000). On the other hand, direct instruction refers to a pattern of teaching that consists of the teacher explaining a new concept or skill to a large group of students, having them test their understanding by practicing under teacher direction, and encouraging them to continue to practice under teacher guidance (Joyce & Weil). Individually, each model has its own purposes and features; however, collectively they promote self-discipline, problem-solving abilities, and collaboration. It is projected that the implementation of these models of teaching will help FCS students meet their motivational needs, enhance instruction in FCS classes, and promote participation and growth in FCCLA. The mean scores for nAch, nAff, and nPower of FCS students indicated that their needs were parallel to the goals and principles of a quality FCSE program with an integral FCCLA chapter. That is, in a quality FCSE program, the needs of the students will be emphasized when following state and national standards and the purposes of FCCLA. A quality FCSE program has opportunities for achievement through instructional activities and FCCLA projects. A quality FCSE program provides opportunities for affiliation, hands-on learning in group situations, and integral inclusion or a co-curricular approach to FCCLA. Students also have the opportunity to become leaders within the classroom setting as leaders of their groups and/or leaders in classroom FCCLA activities. The findings of this study should provide insight to educators who wish to utilize these models of teaching to meet the motivational needs of students.
Following are several recommendations for further study. First, and most importantly, local instructors should conduct program evaluations to determine whether the instructional program is meeting student motivational needs. Second, the state and national associations of FCCLA should continue to conduct research on the needs of members to ensure quality programming of the organization. Next, it is recommended that FCS teacher educators work diligently to help prospective FCS teachers understand the benefits of the FCCLA program in meeting the motivational needs of their future students. Fourth, studies may be conducted in other career and technical student organizations to determine the needs of students in the respective organizations. Lastly, a study of the role of gender, grade level, and/or ethnic background may provide further information and understanding of student motivational needs and student organizations. Table 1 Means, Standard Deviations, and Analysis of Variance for Family and Consumer Sciences Students and the Need for Achievement, Affiliation, and Power
2018-12-03T06:59:54.187Z
2002-05-01T00:00:00.000
{ "year": 2002, "sha1": "44804f1c901a8431b83d8b0bd69bfbeec2d30c60", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21061/jcte.v18i2.606", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "44804f1c901a8431b83d8b0bd69bfbeec2d30c60", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
236314702
pes2o/s2orc
v3-fos-license
Chromosomal Abnormalities Associated With Local Recurrence or Pulmonary Metastasis of Giant Cell Tumor of Bone in Thai Adults: A Prospective Cohort Study With 6 Years of Follow-Up Background: Giant cell tumor (GCT) of bone demonstrates chromosomal abnormalities. This study aimed to investigate the prognostic role of chromosomal abnormalities of primary GCT of bone relative to local recurrence or pulmonary metastasis. Methods: This prospective longitudinal cohort study with 6 years of follow-up included consecutive patients with primary GCT of bone that were surgically treated during 2011 to 2013 at the Department of Orthopaedic Surgery, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand. All patients underwent surgical resection with extended intralesional curettage and phenol local adjuvant therapy. Systematic cytogenetic analysis compared cytogenetic abnormalities between patients with and without local recurrence or pulmonary metastasis. Fifteen patients were eligible, enrolled, and had successful cytogenetic analysis. Results: The median follow-up time was 46 months (interquartile range [IQR]: 32-58). Five patients experienced local recurrence or pulmonary metastasis, with a median time to recurrence of 6 months (IQR: 3.25-10.5). The mean number of abnormal cells in the primary culture compared between those with local recurrence or pulmonary metastasis and those without was 24.4 vs. 9.6 cells, respectively (p=0.04). A similar pattern was observed in the cultures of the subsequent four passages (all p<0.05). Forty-five patterns of clonal telomeric association (tas) were observed in passaged cultures. Six tas patterns were associated with local recurrence or pulmonary metastasis, including tas(11;19)(p15;q13.4), tas(15;19)(q26.3;q13.4), tas(15;22)(p13;p13), tas(16;19)(p13.3;q13.4), tas(17;19)(p13;q13.4), and tas(19;22)(q13.4;q13). Conclusions: The mean number of abnormal cells and the six identified tas patterns may be valuable prognostic factors for local recurrence or pulmonary metastasis of GCT of bone. Background Giant cell tumor (GCT) of bone is the most common benign aggressive tumor of bone. The incidence of this tumor in Asian populations is approximately 15% [1][2][3][4], and postoperative recurrence rates ranging from 18-65% have been reported [5][6][7]. The recurrence rate is approximately 17.4% within the first five years of follow-up [8]. Intralesional curettage with the subsequent application of different adjuvant substances to control local recurrence of GCT of bone has been studied, with adjuvants including warm Ringer's lactate solution, phenol, bone cement, zoledronic acid-loaded bone cement, and liquid nitrogen [9][10][11][12]. Campanacci, et al. reported the presence of tumor at the surgical margin to be the prognostic factor that most strongly predicts recurrent GCT [13]. Other potential predictors have been extensively studied; however, radiographic findings, histologic appearance, and immunohistochemical stains, such as Ki-67, were not able to predict local aggressiveness and/or the metastatic potential of the tumor after surgery [14][15][16][17][18][19]. Conversely, abnormalities found in genetic studies of DNA or chromosomal studies of the tumor were strongly associated with local recurrence and lung metastasis [20][21][22][23]. Bridge, et al. found chromosomal abnormalities in GCT of bone, particularly telomeric fusion.
As a consequence of that finding, cytogenetic analysis has been suggested as a potentially useful tool for predicting the aggressiveness of a tumor [24]. However, few cytogenetic studies have been reported to date. Accordingly, the aim of this study was to identify chromosomal abnormalities in GCT of bone that are significantly associated with the primary outcomes of our study by subjecting tissue taken from primary lesions to cytogenetic analysis and then comparing cytogenetic abnormalities between patients with and without local recurrence or pulmonary metastasis. Methods This prospective longitudinal cohort study with 6 years of follow-up included consecutive patients with primary GCT of bone that were surgically treated during 2011 to 2013 at the Division of Orthopaedic Oncology of the Department of Orthopaedic Surgery, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand. The Siriraj Institutional Review Board (SIRB) approved our study protocol (COA no. 418/2556 (EC4)), and all patients provided written informed consent to participate. All methods were carried out in accordance with relevant guidelines and regulations. Complete history, physical examination, and imaging investigations, including plain radiograph of the lesion, plain radiograph of the chest (posteroanterior and lateral views), computed tomography (CT) of the chest, bone scan, and magnetic resonance imaging/CT scan of the lesion, were performed in all patients. Open biopsy was performed for definite diagnosis in all patients. If the pathological diagnosis was conventional GCT of bone, the patient was enrolled in the study. Demographic and clinical data, including age, gender, location of the tumor (axial/nonaxial skeleton), and preoperative Campanacci GCT tumor classification, were recorded for all patients. All patients then underwent definitive treatment using extended intralesional curettage, and 1 cubic mm of the solid parts of the tumor was sent for cytogenetic study. The primary tumor tissues derived intraoperatively were added into sterile tubes containing fetal bovine serum cell culture media, and then they were immediately transferred to our cytogenetic laboratory. Local treatment with high-speed burr, electrocautery, and phenol was performed, and bone cement was used to fill the bony lesions. After surgery, patients were followed up every month during the first year, and then every three months after that for six years, consistent with routine practice at our center. The specimens were subsequently mechanically disaggregated by mincing with a scalpel blade, and enzymatically by incubation in collagenase (Biochrom AG). Short-term culture was performed following the Mandahl protocol. After removal of the collagenase solution, the cells were resuspended in DMEM with 20% fetal calf serum. The cultures were placed in a humidified 37°C, 5% CO2 incubator. After 10-14 days of cultivation, the cells were in log phase. Three hours before harvesting, the cells were exposed to colchicine (0.25 µg/mL). The cell suspension was then resuspended in hypotonic KCl solution, 0.075 M (Sigma-Aldrich), twice, and then fixed three times with a 3:1 mixture of methanol and acetic acid. The cell suspension was dropped onto slides, and Q-banding was performed. Serial passaging was then performed on the cultures of all patients to study the dynamics of chromosome changes in GCT of bone. The culture conditions were as previously described, and the cell cultures were split 1:2 at subconfluency.
The samples were studied from primary culture to five passages [25]. Fifty metaphases in every passage were analyzed according to the International System for Human Cytogenetic Nomenclature 2013. The number of abnormal cells and the total number and types of abnormalities, including numerical, structural, and telomeric association, were recorded. Association of the short arm of the acrocentric chromosome was also recorded as telomeric association. Nonclonal and clonal chromosome changes and the frequencies of the telomeric associations were calculated. Study size and statistical analysis An a priori sample size was calculated for this study. All eligible patients were included. Categorical variables are presented as frequency and percentage, and normally distributed continuous variables are given as mean plus/minus standard deviation. An unpaired t-test was used to compare the means of the local recurrence or pulmonary metastasis group with the group that did not have either of those outcomes. A p-value of less than 0.05 was considered statistically significant, and all data analyses were performed using SPSS Statistics version 18 (SPSS, Inc., Chicago, IL, USA). Results A total of 15 patients were screened for eligibility and enrolled. No patients were lost to follow-up or died during the study period. All participants completed the study protocol, including successful cytogenetic analysis. Patient demographic and clinical characteristics are shown in Table 1. The mean age of the included patients was 35.4 years. Of the 15 included patients, 10 (66.7%) were female. Most of our GCT cohort had non-axial skeletal involvement (86.6%) and Campanacci stage 2 (66.7%). There was only one case (6.7%) of confirmed pulmonary metastasis prior to operation. Five patients (33.3%) had either local recurrence or pulmonary metastasis during the 6-year postoperative follow-up period. The mean number of abnormal cells in the primary culture of patients with either study outcome of interest was significantly higher than in those without (24.4 vs. 9.6 cells, respectively; p=0.04). Similar patterns were found in the subsequent cultures for four passages (p<0.05). Six patterns of clonal telomeric association, including tas(11;19)(p15;q13.4), tas(15;19)(q26.3;q13.4), tas(15;22)(p13;p13), tas(16;19)(p13.3;q13.4), tas(17;19)(p13;q13.4), and tas(19;22)(q13.4;q13), were consistently found in the cultured samples from primary GCT of bone from patients with recurrence (Figures 1-6). Similar abnormal observations were found in all passaged cultures. Discussion Prediction of local recurrence or pulmonary metastasis of GCT of bone after surgery is a major challenge for orthopedic oncology surgeons. The recurrence rate of GCT and/or pulmonary metastasis was reported to range from 0% to 4% [26][27][28][29][30]. Early detection of patients at high risk for recurrence would improve outcomes; however, a proven and accepted list of independent predictive factors has not yet been established. Some predictors of recurrent GCT of bone have been explored and reported, including serum alkaline phosphatase, serum calcium, serum phosphate, serum vitamin D level, and bone mineral density [14][15][16][17][18][19][20][21][22][23], yet none of these are used in clinical practice. Some centers use Campanacci staging, which is a system that was designed to stage only GCT [13]. Gorunova, et al. suggested that local recurrence probably depends more on the adequacy of surgical treatment than on the intrinsic biology of the tumors [32]. In contrast, other authors proposed that the intrinsic biology of the tumors, such as their DNA, may be associated with the recurrence of GCT of bone [33][34][35]. Jeffrey, et al.
found telomeric association and del(11)(p11) to have a strong association with local recurrence of GCT of bone [36]. Bridge, et al. also stated that telomeric fusion and cytogenetic analysis might be useful in predicting the biologic behavior of GCT of bone [24]. Therefore, the present study was conducted with cytogenetic study to identify chromosomal abnormalities that may predict recurrent GCT of bone. The results of the cytogenetic analysis in the present study showed a significantly higher number of abnormal cells in the recurrent group compared with the nonrecurrent group. This finding could be clearly established in the primary culture or the first passage of the tumor. Furthermore, we found six consistent patterns of cytogenetic abnormality only in our recurrent cases. The cytogenetic analysis results were also similar in the later passages (the second to fifth passages of the culture). The duration of each passage was approximately 1 week, allowing the aggressiveness of the tumor to be analyzed cytogenetically. These findings are very important because the mean number of cells in the primary culture was adequate for identifying the aggressiveness of the intrinsic tumor biology. The results of this study provide credible evidence that cytogenetic analysis may be a dependable method for predicting the likelihood of tumor recurrence in GCT of bone. Our study had a similar result to that of Unni, et al., who reported the most common abnormal karyotypic findings in recurrence of GCT of bone to be tas11p, 15p, 19q and 21p [36]. Limitations This study has some noteworthy limitations. First, this study may have been limited by interobserver variation during the cytogenetic study due to the high technical skill required. Second, we did not conduct a genome sequencing study because it is necessary that we identify chromosomal abnormalities via cytologic study first. Third, the sample size was small, so adjustment for confounding factors or prognostic modeling could not be performed. Fourth and last, the patients that were enrolled in this study were recruited from a single center. The strength of this study is its prospective longitudinal cohort design with a 6-year follow-up period. Conclusion The mean number of abnormal cells and the six tas patterns, including tas(11;19)(p15;q13.4), tas(15;19)(q26.3;q13.4), tas(15;22)(p13;p13), tas(16;19)(p13.3;q13.4), tas(17;19)(p13;q13.4), and tas(19;22)(q13.4;q13), may be valuable prognostic factors for local recurrence or pulmonary metastasis of GCT of bone. Declarations All methods were carried out in accordance with relevant guidelines and regulations. This study was retrospectively registered (TCTR20210427003). Consent to participate: Informed consent was obtained from all individual participants included in the study. Consent for publication: All authors have read and approved the final submitted manuscript. Authors' contributions: Pakpoom Ruangsomboon and Rapin Phimolsarnti provided the research questions, conducted data collection, analyzed data, contributed to the discussion, and developed the full manuscript. Worapa Heepchantree, PhD, and Pissanu Reingrittha, MD, examined all data analyses, detailed the results and statistical calculations, and collected and monitored data. Saranatra Waikakul, MD, also provided the research question and useful advice. Conflict of interest declaration: All authors declare no personal or professional conflicts of interest, and no financial support from the companies that produce and/or distribute the drugs, devices, or materials described in this report. Availability of data and material: The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Consent for publication All authors have read and approved the final version of the manuscript, and all authors agree with the decision to submit this manuscript for journal publication.
2021-07-26T00:05:56.750Z
2021-06-10T00:00:00.000
{ "year": 2021, "sha1": "0c8ffecd33e16882373a2b4ac9fc4473630b40da", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-538122/v1.pdf?c=1631898791000", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ca43f4c138b8cd9606834a1071c76b7df2c26a41", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258998985
pes2o/s2orc
v3-fos-license
Time trends in the incidence of cardiovascular disease, hypertension and diabetes by sex and socioeconomic status in Catalonia, Spain: a population-based cohort study Objective We aimed to estimate how longitudinal trends in cardiovascular disease, hypertension and type 2 diabetes mellitus incidence in Catalonia, Spain from 2009 to 2018 may differ by age, sex and socioeconomic deprivation. Design Cohort study using prospectively collected data. Setting Electronic health records from primary healthcare centres in Catalonia, Spain. Participants 3 247 244 adults (≥40 years). Outcome measures We calculated the annual incidence (per 1000 person-years) and incidence rate ratios (IRR) between three time periods of cardiovascular disease, hypertension and type 2 diabetes mellitus to measure trends and changes in incidence during the study period. Results In 2016–2018 compared with 2009–2012, cardiovascular disease incidence increased in the 40–54 (eg, IRR=1.61, 95% CI: 1.52 to 1.69 in women) and 55–69 age groups. There was no change in cardiovascular disease incidence in women aged 70+ years, and a slight decrease in men aged 70+ years (0.93, 0.90 to 0.95). Hypertension incidence decreased in all age groups for both sexes. Type 2 diabetes mellitus incidence decreased in all age groups for both sexes (eg, 0.72, 0.70 to 0.73 in women aged 55–69 years), except for the 40–54 year age group (eg, 1.09, 1.06 to 1.13 in women). Higher incidence levels were found in the most deprived areas, especially in the 40–54 and 55–69 groups. Conclusions Overall cardiovascular disease incidence has increased while hypertension and type 2 diabetes mellitus incidence have decreased in the last years in Catalonia, Spain, with differences in trends by age group and socioeconomic deprivation. VERSION 1 - REVIEW In the methods, it would be important to show variations in SES in all groups over this period. Some studies found that, even in the most deprived groups, small variations in wage, human development index or social vulnerability index can promote profound changes in cardiovascular mortality. Moreover, it would be helpful if the terms "most deprived" and "less deprived" could be translated into numbers (e.g., wages, income) in the manuscript file, not only in the supplementary material. In the discussion, I think it would be important to include comparisons with other studies in a global manner, even when a specific community is being studied like yours, especially in countries with low income. Thank you for the opportunity. Hope I could be helpful.
HTN and T2DM are CVD risk factors and often precede cardiovascular disease, by excluding those people the CVD incidence is probably underestimated. 3. It is not clear to me what you mean with 'minimize the potential inclusion of prevalent cases' in the methods page 4 line 3. As I understood you only included people free from cardio metabolic conditions at baseline, thus prevalent cases cannot be included. 4. The aim of your study is to estimate how time trends differ by age, sex and SES. However, you did not calculate the statistical significance of the time trends. By doing this and also looking at interaction between time and age, sex and SES, conclusions about differing time trends would be more substantiated. 5. The analyses on SES have only been done in people from urban areas. I suppose this group is the total cohort minus the almost 1 million people for whom SES is missing? This may affect interpretation of the results. Therefore I suggest to clearly state this in the results and/or discussion section when interpreting the SES results. 6. Figure 2 shows incidence rate ratios of CVD, HTN and T2DM stratified by age and sex. Although not in all age strata, to me there seem to be some (significant) sex differences in increase or decrease of the conditions between the time periods. Since the aim was to investigate incidence differences across sex as well I would like to see a short comment on sex differences. In addition, the vertical dotted axes for the three conditions are not aligned. Aligning these would make interpretation more easy. VERSION 1 -AUTHOR RESPONSE Response to Reviewer 1: Congratulations to the authors. Socioeconomic status is a neglected cardiovascular risk factor and studies which link them with cardiovascular diseases can emphasized its importance in cardiovascular prevention. 1. In background, I suggest the use of most recently GBD data. We thank the Reviewer for this comment and have updated data accordingly to include the most recent GBD data from 2019. Introduction: Cardiovascular disease (CVD) is the leading cause of death worldwide, being responsible for nearly 18.6 million global deaths in 2019 (1). References: 1. Roth GA, Mensah GA, Johnson CO, Addolorato G, Ammirati E, Baddour LM, et al. Global Burden of Cardiovascular Diseases and Risk Factors, 1990- In methods, it would be important to show variations in SES in all groups in this period. Some studies found that even in most deprived groups, little variations in wage, human development index or social vulnerability index can promote profound changes in cardiovascular mortality. Moreover, it would be helpful if terms "most deprived" and "less deprived" could be translated to numbers (i.e. wages, income) in manuscript file, not only in supplementary material. We thank the Reviewer for these suggestions and we agree that more detailed information on SES would be interesting and relevant to study in the context of cardiometabolic conditions. To consider SES, our study used the MEDEA index (Mortalidad en áreas pequeñas españolas y desigualdades socioeconómicas y ambientales) which is a composite indicator designed for the Spanish population that takes into account unemployment, manual work, temporary work, and educational level (please see supplementary material and reference number 17 cited in the manuscript: Domínguez-Berjón et. al, 2008). 
Because the indicator does not take into account quantitative indicators of SES (such as wages or income), nor do we have access to data from the human development or social vulnerability indexes for our study population, we are unfortunately unable to include this information in our study or provide quantitative cutoffs in the presentation of our results. In the context of our study, the MEDEA index is used as an orienting and comparative indicator to show differences between a quintile relative to the others. 3. In the discussion, I think it would be important to include comparisons with other studies in a global manner, even when a specific community is being studied like yours, especially in countries with low income. We thank the Reviewer for this suggestion. We agree that the inclusion of a variety of different national and cultural contexts is important to discuss and understand the results of our study. However, to the best of our knowledge, there are no studies from low-income countries that examine longitudinal time trends for the cardiometabolic conditions we studied or that allow for a direct comparison or discussion with our results. Thus, we were unable to include other national or global contexts in our discussion. However, we have added a mention of the current understanding of cardiometabolic conditions in middle- and low-income countries to the introduction section of the manuscript as a means of situating the reader within the global context. Introduction: Previous epidemiological studies have found that trends in CVD (5), HTN (6), and T2DM (7) incidence are stabilizing or declining in most high-income countries, while cardiometabolic diseases are on the rise in low- and middle-income countries (8). Response to Reviewer 2: This study examined longitudinal trends in incidence of cardiovascular disease (CVD) and two CVD risk factors: hypertension (HTN) and diabetes mellitus type 2 (T2DM). Incidence trends are described between 2009 and 2018 in a large Catalonian population of more than 3.2 million adults aged 40 years or older. Statistical analyses were stratified by sex, age and area level SES. With some exceptions, CVD incidence increased between 2009 and 2018, and HTN and T2DM incidence have decreased across the different strata. Overall, the manuscript is clearly written; the aim is clear, the applied methodology seems appropriate and the overall presentation and interpretation of findings is clear and in line with the tables and figures. I have some comments/suggestions to improve the manuscript: 1. Although trends are assessed stratified by sex, this is not mentioned in either the title or the introduction. Various previous studies provide insight into sex differences in CVD incidence. I suggest adding this to your introduction and consider adding this to your title as well. We thank the Reviewer for this suggestion. We agree and have updated the manuscript title and added relevant information to the introduction section as well. Title: "Time trends in the incidence of cardiovascular disease, hypertension, and diabetes by sex and socioeconomic status in Catalonia, Spain: a population-based cohort study" Introduction: However, few studies to date have taken into account possible differences by population subgroups, despite evidence of sex differences in cardiometabolic disorders (8) and evidence that individuals of a low socioeconomic status (SES) are at a greater risk of developing CVD (9), HTN (10), and T2DM (11). 2.
On page 3 line 59 it is stated that only individuals free of cardiometabolic conditions are included. Excluding people with those conditions creates a bias towards the estimation of CVD. HTN and T2DM are CVD risk factors and often precede cardiovascular disease; by excluding those people, the CVD incidence is probably underestimated. We thank the Reviewer for pointing out this potential source of confusion. We would like to clarify that, in this case, we excluded from the incidence calculations individuals who were already diagnosed with a cardiometabolic condition at baseline or prior to the start of the study period, as a means of excluding prevalent cases of CVD, HTN, and T2DM from the incidence calculations. In other words, for each condition, we excluded individuals with prior history of that specific condition. For example, for CVD, we excluded individuals with a history of CVD, but we did not exclude individuals with a history of T2DM or HTN. For this reason, we believe that the effect of this exclusion criterion on our calculations of CVD incidence is minimal. To avoid confusion, we have updated the statistical analyses subsection of the manuscript to include this same example and justification: Methods: We calculated the overall incidence rate of CVD, HTN, and T2DM for each study year from 2009-2018, stratified by sex. Incidence was calculated as the number of new cases of each condition per 1,000 person-years of follow-up, and person-years were calculated as the number of years each individual was at risk of developing one of the three conditions during the study period. Next, we calculated yearly incidence rates of all three conditions, stratified by sex and age group. Finally, we restricted these same analyses to individuals living in urban areas and stratified additionally by SES. Individuals diagnosed with CVD, HTN, or T2DM were not considered as eligible incident cases for the same condition in future years after the year of their first diagnosis. For each condition, we excluded individuals with prior history of that specific condition. For example, for CVD, we excluded individuals with a history of CVD, but we did not exclude individuals with a history of T2DM or HTN. 3. It is not clear to me what you mean with 'minimize the potential inclusion of prevalent cases' in the methods page 4 line 3. As I understood, you only included people free from cardiometabolic conditions at baseline, thus prevalent cases cannot be included. We thank the Reviewer for pointing out this potential source of confusion. We would like to clarify that the database we used for the study (SIDIAP) is an electronic health records database that started recording patient data in 2006. While patient records were being transferred into the database during its first years of data recording, diagnoses may have been mis-recorded as a first diagnosis at the time of entry into the database (and therefore considered an incident case in the year they were recorded), despite the patient having received the diagnosis years prior. For this reason, we chose to exclude the first three years of data registry from our study period to avoid the inclusion of prevalent cases from these years. To avoid confusion, we have updated our manuscript to include a short explanation and justification of this decision: Methods: Though SIDIAP has data available starting in 2006, we chose for our study period to begin in 2009.
During the first years of construction of the database, diagnoses may have been mis-recorded as a first diagnosis at the time of entry into the database (and therefore considered an incident case in the year they were recorded), despite the patient having received the diagnosis years prior. 4. The aim of your study is to estimate how time trends differ by age, sex and SES. However, you did not calculate the statistical significance of the time trends. By doing this, and also looking at interaction between time and age, sex and SES, conclusions about differing time trends would be more substantiated. We thank the Reviewer for this suggestion. We calculated time trends and plotted the incidence results as a means of providing a visual and conceptual description of the changes in CVD, HTN, and T2DM incidence from 2009 to 2018. To test differences in these trends, we used incidence rate ratios (IRRs) between different segments of our study period. We have included the 95% confidence intervals of the IRRs, which we calculated and used as the basis to discuss the differences in time trends between the different age and sex strata. 5. The analyses on SES have only been done in people from urban areas. I suppose this group is the total cohort minus the almost 1 million people for whom SES is missing? This may affect interpretation of the results. Therefore I suggest to clearly state this in the results and/or discussion section when interpreting the SES results. We thank the Reviewer for this suggestion. We have updated the methods and discussion sections to clarify this. Methods: Finally, we stratified additionally by SES. Analyses stratified by SES were limited to individuals residing in urban areas, as the MEDEA index is not available for rural areas (please see the supplementary material). Analyses by SES were restricted to individuals residing in urban areas for the same reason as mentioned previously. Discussion: When stratifying by SES among individuals residing in urban areas, higher incidence levels were observed in the most deprived areas, especially in the youngest two age groups, despite CVD, HTN, and T2DM trends mirroring the overall trends. 6. Figure 2 shows incidence rate ratios of CVD, HTN and T2DM stratified by age and sex. Although not in all age strata, to me there seem to be some (significant) sex differences in the increase or decrease of the conditions between the time periods. Since the aim was to investigate incidence differences across sex as well, I would like to see a short comment on sex differences. In addition, the vertical dotted axes for the three conditions are not aligned. Aligning these would make interpretation easier. We thank the Reviewer for this suggestion. We have updated the results section to include a mention of sex differences where relevant. Results: Increases in CVD incidence were steeper in women than men in all age groups and nearly all compared study periods. Decreases in HTN incidence tended to be steeper in men than women, especially in the 55-69 age group. However, T2DM incidence decreased slightly more sharply for men than women in the 55-69 age group and more sharply in the 70+ age group in both sexes. With regards to Figure 2, we found that aligning the dotted vertical axes obscures the detail in the differences in magnitude and direction of the associations. Presenting the IRRs for CVD on a different axis scale better reveals differences in the magnitude of the IRRs between the different sex and age strata.
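As an aside, the incidence measure discussed throughout this exchange (new cases per 1,000 person-years) is simple arithmetic. A minimal sketch follows; the counts are hypothetical, not the study's data.

```python
def incidence_per_1000_py(new_cases: int, person_years: float) -> float:
    """Incidence rate expressed per 1,000 person-years of follow-up."""
    return 1000.0 * new_cases / person_years

# Hypothetical stratum: 4,200 new diagnoses over 1.8 million person-years
rate = incidence_per_1000_py(4200, 1.8e6)
print(f"{rate:.2f} new cases per 1,000 person-years")  # -> 2.33
```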
GENERAL COMMENTS The authors have answered adequately almost all questions except for the "discussion". Thus, I still suggest that the authors look at BMC Public Health, the Portuguese Journal of Cardiology (Revista Portuguesa de Cardiologia) and BMJ Open itself, which contain studies on socioeconomic factors and their relationship with cardiovascular death. This is only a suggestion, though, and it should not be used against the publication of this excellent study in the present Journal. Congratulations to the authors. Smits, Robin Amsterdam UMC Locatie AMC, Public and Occupational Health REVIEW RETURNED 09-Dec-2022 GENERAL COMMENTS I would like to congratulate the authors on this well-written and interesting manuscript. My comments were addressed adequately. I have one additional comment regarding the authors' response and one suggestion that I would like to be considered: 1. To my suggestion of calculating the statistical significance of the time trends the authors responded "We calculated time trends and plotted the incidence results as a means of providing a visual and conceptual description of the changes in CVD, HTN, and T2DM incidence from 2009 to 2018. To test differences in these trends, we used incidence rate ratios (IRRs) between different segments of our study period. We have included the 95% confidence intervals of the IRRs, which we calculated and used as the basis to discuss the differences in time trends between the different age and sex strata." It is not entirely clear to me why these periods were chosen. If we look at Figure 1, showing the incidences during the total study period, some trends, for example, seem to be quite stable. Therefore, my suggestion would still be to calculate a p-value for a time trend over all the years as well (e.g. with the Cochran-Armitage test for trend). Currently we only see relative differences between time periods, and this may be misleading, especially when not specifying why these time periods were chosen. To elaborate, addition of the overall trends and/or argumentation for the chosen time periods prevents the reader from getting the impression of cherry picking, i.e. reporting no general trend and choosing these time periods instead of other time intervals because of their significance. 2. Sex-stratified results were added where applicable. I noticed that the SES differences in incidence seem to be different for women and men (the SES difference is larger in women); in other words, there may be an interaction between sex and SES. The authors may consider adding this observation if they feel it fits in the manuscript. VERSION 2 - AUTHOR RESPONSE Response to Reviewer 1: 1. The authors have answered adequately almost all questions except for the "discussion". Thus, I still suggest that the authors look at BMC Public Health, the Portuguese Journal of Cardiology (Revista Portuguesa de Cardiologia) and BMJ Open itself, which contain studies on socioeconomic factors and their relationship with cardiovascular death. This is only a suggestion, though, and it should not be used against the publication of this excellent study in the present Journal. Congratulations to the authors. We would like to thank the Reviewer for the thoughtful review and comments on our article. We agree that the studied association between SES and cardiovascular mortality is an interesting and relevant addition.
Therefore, we have added the following sentence to the discussion section: Discussion: For example, low SES and education levels influence food behaviors (35), physical activity patterns and abilities (36), and access to preventative healthcare (37), all of which influence CVD, HTN, and T2DM risk, but also CVD mortality (38). Response to Reviewer 2: I would like to congratulate the authors on this well-written and interesting manuscript. My comments were addressed adequately. I have one additional comment regarding the authors' response and one suggestion that I would like to be considered: 1. To my suggestion of calculating the statistical significance of the time trends the authors responded "We calculated time trends and plotted the incidence results as a means of providing a visual and conceptual description of the changes in CVD, HTN, and T2DM incidence from 2009 to 2018. To test differences in these trends, we used incidence rate ratios (IRRs) between different segments of our study period. We have included the 95% confidence intervals of the IRRs, which we calculated and used as the basis to discuss the differences in time trends between the different age and sex strata." It is not entirely clear to me why these periods were chosen. If we look at Figure 1, showing the incidences during the total study period, some trends, for example, seem to be quite stable. Therefore, my suggestion would still be to calculate a p-value for a time trend over all the years as well (e.g. with the Cochran-Armitage test for trend). Currently we only see relative differences between time periods, and this may be misleading, especially when not specifying why these time periods were chosen. To elaborate, addition of the overall trends and/or argumentation for the chosen time periods prevents the reader from getting the impression of cherry picking, i.e. reporting no general trend and choosing these time periods instead of other time intervals because of their significance. We thank the Reviewer for this suggestion. Our study is exploratory and descriptive in nature. It includes the description of incidence trends over time and the calculation of incidence rate ratios (IRRs), which provide insight into whether CVD, HTN, and T2DM trends during a given time period differ compared to a reference period. Given that our study considers incidence trends from 2009-2018, we divided our study period into two halves (2009-2013 vs. 2014-2018), with the first time period corresponding to the initial five years of the study period and the second time period corresponding to the final five years of the study period. This division was a decision made by the research team as a relative means to test for potential differences between two time periods within the defined study period. We believe these IRRs, along with their corresponding confidence intervals, are the most appropriate way of providing a comparison of incidence rates between two periods. Particularly in the case of studies with large sample sizes, such as ours, a p-value, which by definition depends on sample size, can often be found to indicate a "significant" result even when the effect is small/clinically unimportant. IRRs and confidence intervals, on the other hand, provide an indication of the size of an effect along with its uncertainty. Therefore, we believe this is the most appropriate approach for our study.
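The IRR-with-CI approach defended here can be sketched in a few lines using the standard Poisson (Wald, log-scale) approximation, SE(log IRR) = sqrt(1/cases_1 + 1/cases_0); the case counts and person-years below are hypothetical, not the study's data.

```python
import math

def irr_with_ci(cases_1, py_1, cases_0, py_0, z=1.96):
    """Incidence rate ratio of period 1 vs. reference period 0, with a
    Wald-type 95% CI on the log scale (standard Poisson approximation)."""
    irr = (cases_1 / py_1) / (cases_0 / py_0)
    se_log = math.sqrt(1.0 / cases_1 + 1.0 / cases_0)
    lo = math.exp(math.log(irr) - z * se_log)
    hi = math.exp(math.log(irr) + z * se_log)
    return irr, lo, hi

# Hypothetical counts: 2014-2018 vs. 2009-2013 in one age-sex stratum
irr, lo, hi = irr_with_ci(cases_1=5200, py_1=2.1e6, cases_0=3300, py_0=2.0e6)
print(f"IRR = {irr:.2f} (95% CI: {lo:.2f} to {hi:.2f})")
```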
We have added a justification of this decision and an explanation of our reasoning to the Methods - Statistical analyses section of our manuscript:
Methods - Statistical analyses: We calculated incidence rate ratios (IRRs) and their corresponding 95% confidence intervals (95% CIs) for each age-sex subgroup to analyze the differences in incidence between two sub-periods: 2009-2013 vs. 2014-2018. This division through the midpoint of the study period allowed us to compare incidence and observe potential differences between the first and second halves of the entire period.
2. Sex-stratified results were added where applicable. I noticed that the SES differences in incidence seem to be different for women and men (the SES difference is larger in women); in other words, there may be an interaction between sex and SES. The authors may consider adding this observation if they feel it fits in the manuscript.
We thank the Reviewer for this observation. We agree that a potential interaction between sex and SES is an interesting and pertinent consideration for future studies, and we have updated the Results section to mention this potential interaction for all three conditions and the Discussion section to present this as a limitation:
Results: A potential interaction between sex and SES was observed, as SES differences were consistently higher in women than men for both comparison points (e.
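For readers who want to reproduce the kind of comparison described in this exchange, the sketch below computes an IRR between two periods and its Wald-type 95% confidence interval on the log scale, assuming Poisson-distributed case counts; the case counts and person-time figures used are hypothetical placeholders, not numbers from the study.

```python
import math

def irr_with_ci(cases_1, persontime_1, cases_0, persontime_0, z=1.96):
    """Incidence rate ratio (period 1 vs. reference period 0) with a Wald
    95% CI on the log scale, assuming Poisson-distributed case counts."""
    rate_1 = cases_1 / persontime_1
    rate_0 = cases_0 / persontime_0
    irr = rate_1 / rate_0
    se_log = math.sqrt(1 / cases_1 + 1 / cases_0)  # SE of log(IRR)
    lo = math.exp(math.log(irr) - z * se_log)
    hi = math.exp(math.log(irr) + z * se_log)
    return irr, (lo, hi)

# Hypothetical counts: 1,200 incident cases over 500,000 person-years in
# 2014-2018 vs. 1,000 cases over 500,000 person-years in 2009-2013.
irr, (lo, hi) = irr_with_ci(1200, 500_000, 1000, 500_000)
print(f"IRR = {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI that excludes 1 plays roughly the role of a significant p-value, while, as the authors argue, the IRR itself conveys the size of the effect rather than only its detectability.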
Structurally stable families of periodic solutions in sweeping processes of networks of elastoplastic springs

Networks of elastoplastic springs (elastoplastic systems) have been linked to differential equations with polyhedral constraints in the pioneering paper by Moreau (1974). Periodic loading of an elastoplastic system therefore corresponds to a periodic motion of the polyhedral constraint. According to Krejci (1996), every solution of a sweeping process with a periodically moving constraint asymptotically converges to a periodic orbit. Understanding whether such an asymptotic periodic orbit is unique, or whether there can be an entire family of asymptotic periodic orbits (which form a periodic attractor), has been an open problem since then. Since a suitable small perturbation of a polyhedral constraint always seems capable of destroying a potential family of periodic orbits, it is expected that no potential periodic attractor is structurally stable. In the present paper we give a simple example to prove that, even though a periodic attractor (of non-stationary periodic solutions) can be destroyed by a small perturbation of the moving constraint, the periodic attractor resists perturbations of the physical parameters of the mechanical model (i.e. the parameters of the network of elastoplastic springs).

Introduction

Networks of elastoplastic springs are increasingly used in the modeling of the distribution of stresses in elastoplastic media [4,5], swarming of mobile router networks [2,12], and other physical phenomena. According to Moreau [11], the stresses of the springs of such a network can be described by a differential inclusion (Moreau sweeping process)

-ẏ(t) ∈ N_{C(t)}(y(t)),   y(t) ∈ R^m,   (1)

where C(t) ⊂ R^m is a closed polyhedron that plays the role of a constraint, N_C(x) = {ξ ∈ R^m : ⟨ξ, c − x⟩ ≤ 0 for all c ∈ C} if x ∈ C (and N_C(x) = ∅ otherwise), and the dimension m is equal to or smaller than the number of springs in the network. Periodicity of the constraint C(t) corresponds to periodicity of the external loading applied to the given network of springs. The fundamental result by Krejci [9, Theorem 3.14] says that for C(t) of the form C(t) = C + c(t), where C is a convex closed bounded set and t → c(t) is a T-periodic vector-function, any solution of sweeping process (1) converges to some T-periodic regime. For a class of continuum elastoplastic media with T-periodic loading the uniqueness of the T-periodic response is established in Frederick-Armstrong [6, p. 159]. Sufficient conditions for the uniqueness of the response in sweeping processes can be drawn based on Adly et al [1]. Non-uniqueness of the response for sweeping processes can of course be easily designed, see Fig. 1a, where one gets a family of periodic solutions by moving a rectangle normal to its sides back and forth. However, as shown in Fig. 1b, such a family is destroyed by a small perturbation of the moving constraint. That is why a natural question arises: can any network of elastoplastic springs always be slightly perturbed in a way that destroys any potential family of periodic orbits in the respective sweeping process (1)?

Fig. 2 A one-dimensional network of 5 springs on 5 nodes with one displacement-controlled loading. The circled digits stand for the numbers of nodes. The regular digits are the numbers of springs. The thick bar is the displacement-controlled loading l_1(t). The stress-controlled loadings f_1(t), ..., f_5(t) are applied at the nodes.

As uniqueness of the response lies at the core of the reliability of modeling predictions (see e.g. [3,13]), the above-stated question is not of merely academic value.
We introduce a simple example that answers this question negatively. Specifically, we show that the cyclically loaded network of elastoplastic springs of Fig. 2 leads to a sweeping process with a family of attracting periodic orbits. The paper is organized as follows. In the next section we define a network of elastoplastic springs formally. In section 3 we derive the sweeping process (1) that governs the quasi-static evolution of such a network. Section 5 is based on Moreau [11] and Gudoshnikov-Makarenkov [8]; it compiles a guide for the closed-form computation of the quantities required for the construction of the sweeping process of a given network of elastoplastic springs. This guide is then used in Section 6 to construct the sweeping process of the network of elastoplastic springs of Fig. 2. We rigorously prove (Proposition 2 and Corollary 1) that this sweeping process admits a family of periodic orbits that persists under perturbations of the mechanical parameters of the network.

2 A concise definition of a general network of elastoplastic springs

We consider a network of m elastoplastic springs on n nodes that are connected according to a directed graph given by the n × m incidence matrix D. The Hooke's coefficients a_1, ..., a_m of the springs are arranged into the m × m diagonal matrix A = diag(a_1, ..., a_m). In addition, the network comes with collections of stress-controlled loadings f(t) ∈ R^n and displacement-controlled loadings l(t) ∈ R^q respectively. The stress-controlled loadings are simply applied at the n nodes of the network and are supposed to satisfy the equation of static balance (2). As for the displacement-controlled loading l_k(t), k ∈ 1,q, we consider a chain of springs which connects the left node I_k of constraint k with its right node J_k. To each displacement-controlled loading l_k(t) we therefore associate a so-called incidence vector R_k ∈ R^m, whose i-th component R_k^i is −1, 0, or 1 according to whether spring i increases, does not influence, or decreases the displacement when moving from node I_k to node J_k along the chain selected, see Fig. 3. We assume that the displacement-controlled loadings satisfy condition (3). Mechanically, condition (3) ensures that the displacement-controlled loadings don't contradict one another. For example, (3) rules out the situation where two different displacement-controlled loadings connect the same pair of nodes.

3 A concise formulation of the sweeping process of a general network of elastoplastic springs

In this section we follow Moreau [11] (see also [8]). If condition (2) holds, then there exists a function h̄ : R → R^m with the property (4). Then, under condition (3), there exists an n × q matrix L such that (5) holds. Introducing the subspaces U and V as in (6), where U^⊥ = {y ∈ R^m : ⟨x, y⟩ = 0 for all x ∈ U}, the space V becomes an orthogonal complement of the space U in the sense of the scalar product (7). Therefore, any element x ∈ R^m can be uniquely decomposed as x = P_U x + P_V x, where P_U and P_V are the linear (orthogonal in the sense of (7)) projection maps onto U and V respectively. Define h(t) and g(t) by (8)-(9). Assuming that both f : R → R^n and l : R → R^q are Lipschitz continuous, we get that h(t) and g(t) are Lipschitz continuous as well, so that the function built from h, g, and s in the construction below is absolutely continuous for any absolutely continuous t → s(t).

Theorem 1 [11] (see also [8]) Assume that the network of elastoplastic springs (D, A, C, R, f(t), l(t)) of section 2 satisfies conditions (2) and (3). Assume that h : R → R^m and g : R → R^m given by (8)-(9) are Lipschitz continuous.
Assume that the safe load condition holds. Then the corresponding stress vector satisfies the differential inclusion (called sweeping process) (13)-(14). It remains to note that, for Lipschitz continuous h : R → R^m and g : R → R^m, sweeping process (13) is well-posed.

4 The shakedown condition

The following conditions will rule out the existence of constant solutions.

Proposition 1 [8, Proposition 3] Assume that the conditions of Theorem 1 hold. If (15) holds, then sweeping process (13) doesn't have any solutions that are constant on [t_1, t_2].

Remark 1 Note that the left-hand side of the squared inequality (15) from the statement of Proposition 1 can be computed as (16).

5 A step-by-step guide to compute the quantities of the sweeping process from a network of elastoplastic springs

In this section we again follow Moreau [11], but use the notations and additional properties established in Gudoshnikov-Makarenkov [8]. In particular, the constructions below are valid provided that (3) is satisfied.

Step 1. The matrix M, which solves equation (18).

Step 2. The matrix V_basis. According to (6), V_basis is an arbitrary matrix of m − n + q + 1 = dim V linearly independent columns that solves (19).

Step 3. The matrix D^⊥. Define D^⊥ to be an m × (m − n + 1) matrix of full rank that solves the equation (21).

Step 4. Other quantities. Using Steps 2 and 3, we can compute the (m − n + q + 1) × q matrix L̄ as (22). It turns out that formula (8) can now be rewritten in the closed form (23). To account for all possible functions h(t) from (9) we will simply take h(t) as in (24), where H(t) is an arbitrary Lipschitz continuous control input. It is possible to compute H(t) in terms of f(t), but this is not of added value here. Finally, for Π(t) ∩ V we have (25), where the normal vectors are given by (26) and e_i ∈ R^m is the vector with 1 in the i-th component and zeros elsewhere.

6 The sweeping process of the network of elastoplastic springs of Figure 2

The network of elastoplastic springs of Fig. 2 is given by (27). The 5 × 3 matrix M that solves (18) and the respective 5 × 3 matrix (19) are found as (28)-(29), and H(t) in (24) is an arbitrary Lipschitz continuous function from [0, T] to R^3. According to (17) and (20), one gets dim V = 5 − 5 + 1 + 1 = 2. Following Step 3 of section 5, we compute dim D^⊥ = 5 − 5 + 1 = 1, and the 5 × 1 dimensional solution of (21) is (30). Therefore, according to formula (22), the 2 × 1 matrix L̄ computes as (31), and by (23) we get (32). On the other hand, formula (26) says that, for each i ∈ 1,5, the normal vector n_i is given by (33). Note that formulas (32) and (33) hold for any a_i, i ∈ 1,5, and any c_i^−, c_i^+, i ∈ 1,5. Therefore, the conclusions available from formulas (32) and (33) about n_1, n_4 and g(t) hold for any values of the physical parameters of the network of Fig. 2. However, at this point we don't know whether or not the normals n_1 and n_4 have anything to do with the sides of the shape Π(t) ∩ V given by (25), as it may happen that the constraints of (25) provided by n_1 and n_4 become redundant for a particular h(t).

Proposition 2 For the parameters (34), the constraints of (25) corresponding to the normal vectors n_1 and n_4 constitute two opposite parallel sides of the shape Π(t) ∩ V, and this property persists under small perturbations of the parameters (34).

Proof. Without loss of generality we can consider g(t) ≡ 0. Indeed, since g(t) acts along V, g(t) simply translates Π(t) ∩ V within V, so that g(t) doesn't change the shape of Π(t) ∩ V. Plugging (34) into (28) and using (24), we get an explicit form of h(t). Therefore, for the parameters (34), formula (25) says that x ∈ Π(t) ∩ V if and only if the system of double-sided inequalities (35) holds, with the quantities of (35) given by (36). Based on (33), n_1 = n_4 and n_2 = n_3. Therefore, the 1st and 4th lines of system (35), as well as the 2nd and 3rd lines, combine, which reduces the number of double-sided inequalities to 3. Substituting the expressions (33) with the parameters (34) into (35) and plugging x = V_basis v, where v ∈ R^2, system (35) reduces to the following system (37): for the normals n_1 and n_4, 0 ≤ v_1 ≤ 0.5; for the normals n_2 and n_3, a double-sided inequality in v_2 whose boundary lines appear as the dotted lines of Fig. 4.
Fig. 4 The gray region stands for the set of (v_1, v_2) given by the inequalities (37). The dotted lines denote the sets of (v_1, v_2) where the equalities in (37) are attained (the dotted line v_1 = 0 coincides with the vertical axis, and another dotted line, v_2 = −1.5, is not shown).

Fig. 4 illustrates that the two constraints from (25) corresponding to the normal vectors n_1 and n_4 constitute opposite sides of the shape Π(t) ∩ V. This property persists under small perturbations of the parameters (34). Indeed, formulas (25) and (33) imply that small perturbations of the parameters (34) lead to small parallel displacements of the dotted lines of Fig. 4 (without rotations), so that the two opposite parallel sides remain. The proof of the proposition is complete.

In order to obtain the existence of a structurally stable family of non-stationary periodic solutions, it now remains to apply the displacement-controlled loading (32) with a sufficiently large amplitude. We will now use Proposition 1 to give an estimate for the required amplitude. In the case of a 5-spring network, formula (15) of Proposition 1 follows from (38). In the case of the parameters (34), formula (38) reduces to an explicit inequality in which n_1 is given by (36). Since 160/3 ≈ 53.3, we introduce l(t) as in (39), extended to [0, ∞) by 108-periodicity.

Corollary 1 Consider the network of elastoplastic springs of Fig. 2 with the parameters (34). Assume the displacement-controlled loading is given by (39), so that T = 108. Then, for any parameters a_i, c_i^−, c_i^+, i ∈ 1,5, and any Lipschitz-continuous T-periodic H : [0, T] → R^3 that are close to those in (34), and for any Lipschitz-continuous T-periodic l(t) close to (39), the sweeping process (13)-(14) admits a structurally stable family of non-stationary T-periodic solutions (swept by the opposite parallel sides of Fig. 4). Accordingly, the mechanical model of Fig. 2 admits an entire family of co-existing stress distributions that evolves T-periodically in time.

Conclusions

In this paper we showed that sweeping processes of networks of elastoplastic springs (elastoplastic systems) inherit a designated structure that restricts the possible dynamic transitions. Specifically, we gave an example of an elastoplastic system whose sweeping process admits a structurally stable family of non-stationary periodic solutions: the structure given by the elastoplastic system locks the family of periodic solutions of the associated sweeping process, so that it persists under all small perturbations of the sweeping process that come from small perturbations of the physical parameters of the elastoplastic system.

Compliance with Ethical Standards

Conflict of Interest: The authors have no conflict of interest.
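Although the paper's analysis is closed-form, the qualitative dichotomy it studies, a persistent family of periodic states versus a single asymptotic periodic response, can be observed numerically with Moreau's catching-up scheme y_{k+1} = proj_{C(t_{k+1})}(y_k). The sketch below is a deliberately minimal one-dimensional illustration: the moving interval constraint, period and amplitudes are our own choices and do not correspond to the 5-spring network of Fig. 2, and in one dimension the surviving family consists of constant (shakedown) solutions rather than the non-stationary orbits of Corollary 1, which require the higher-dimensional polyhedral geometry of the paper.

```python
import numpy as np

T, r, steps = 1.0, 0.3, 2000      # period, half-width of C(t), steps per period

def sweep(y0, amp, n_periods=20):
    """Catching-up iterates y_{k+1} = proj_{C(t_{k+1})}(y_k) for the moving
    interval C(t) = [amp*sin(2*pi*t/T) - r, amp*sin(2*pi*t/T) + r]."""
    y = y0
    for k in range(1, n_periods * steps + 1):
        ct = amp * np.sin(2 * np.pi * (k / steps))   # center of C at t = k*T/steps
        y = min(max(y, ct - r), ct + r)              # projection onto the interval
    return y

for amp in (0.2, 1.0):
    finals = [round(sweep(y0, amp), 3) for y0 in (-0.25, 0.0, 0.25)]
    print(f"amplitude {amp}: states after 20 periods = {finals}")
# Small amplitude: the intersection of all C(t) is nonempty, so distinct
# initial states survive as a family of constant (shakedown) solutions.
# Large amplitude (the regime ruled in by Proposition 1): all initial states
# are swept onto one and the same periodic orbit.
```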
Elastic Full Waveform Inversion Based on the Trust Region Strategy

In this paper, we investigate elastic wave full-waveform inversion (FWI) based on the trust region method. FWI is an optimization problem of minimizing the misfit between the observed data and simulated data. Usually, the line search method is used to update the model parameters iteratively: it generates a search direction first and then finds a suitable step length along that direction. The trust region method instead defines a trial step within a certain neighborhood of the current iterate and then solves a trust region subproblem. The theoretical methods for trust region FWI with Newton-type methods are described. The algorithms for the truncated Newton method with the line search strategy and for the Gauss-Newton method with the trust region strategy are presented. Numerical computations of FWI for the Marmousi model by the L-BFGS method, the Gauss-Newton method and the truncated Newton method are completed. The comparisons between the line search strategy and the trust region strategy show that the trust region method is more efficient than the line search method, and that both the Gauss-Newton and truncated Newton methods are more accurate than the L-BFGS method.

Introduction

The physical properties of the earth can be inverted quantitatively from the reflected seismic waves. In modern seismology, traveltime inversion is the main solution for reconstructing media parameters. In FWI, the Fréchet derivative is required to be estimated explicitly before proceeding to the inversion of the linearized system [3]. However, the computational cost of calculating the Fréchet derivative or gradient is too large, so other methods have been taken into account. In [1] [2], the authors proposed to compute the gradient by solving an adjoint problem, which is considered the starting point of time-domain FWI. Since then, FWI has been an important research topic in exploration geophysics. In the 1990s, time-domain FWI was extended to the frequency domain [4] [5]. Frequency-domain FWI has the advantages of high computational efficiency and flexibility of data selection [6]. FWI is an iterative optimization process for solving a nonlinear optimization problem. Generally, the computational cost of global optimization methods is prohibitively large; for this reason, local optimization algorithms are adopted. To avoid the iteration falling into an unreasonable local minimum, the multiscale method was proposed [7] in 1995. The multiscale method performs the inversion from the low-frequency to the high-frequency information step by step. It includes the long-wavelength information of the model and thus enhances the robustness of inversion effectively. It can overcome the problem of local minima caused by a poor initial model. This technique has been successfully applied to time-domain acoustic FWI, see e.g. [8] [9]. In order to reconstruct the long wavelengths of the macroscopic velocity model, Laplace-domain FWI was proposed [10]. A hybrid-domain method combining the Laplace-domain method and the frequency-domain method has also been developed [11]; it can be regarded as a special case of frequency-domain damping inversion. In elastic FWI, there are three different parameters, i.e., the density and two Lamé parameters.
In [12], the authors pointed out that different parameters have coupled effects on the seismic response, which is called the trade-off effect or crosstalk. In order to tame the trade-off effect, several hierarchical inversion strategies have been developed. In [13], the Lamé parameters are inverted with fixed density first and then the velocities are updated. In [14], all parameters are inverted simultaneously; however, the overall step length is estimated by calculating an optimal step length for every parameter individually. The Hessian matrix plays an important role in FWI. It contains the information of multiple scattering wavefields and the trade-off effects among different parameters. The off-diagonal blocks of the Hessian reflect the relationship between different parameters [12]. Newton-type methods such as the quasi-Newton method, the Gauss-Newton method [15] and the truncated Newton method [16] all contain Hessian information and so behave better than the steepest descent method in accuracy. In the calculation of the gradient vector, the adjoint method is usually used [17]. The adjoint method can be extended to the truncated Newton method [18]. For the elastic wave equations, the wave propagation operator is symmetric: the back-propagation operator and the forward-propagation operator are identical. Besides that, the Gauss-Newton approximation of the Hessian matrix is positive definite. This property can be used in the trust region strategy. Our investigations in this paper show that the trust region method can accelerate the convergence rate and improve the inversion accuracy. In this paper, we investigate the application of the trust region strategy in FWI. The rest of our paper is organized as follows. In Section 2, the theoretical methods are described in detail. In Section 3, numerical computations and comparisons are presented. Finally, in Section 4, the conclusion is given.

Theory

In this section, we will introduce the forward problem first. Then we will describe FWI based on the trust region method. In order to solve the subproblem in the trust region algorithm, the two-dimensional subspace method is described.

Forward Method

The source-free time-domain two-dimensional (2D) elastic wave equations can be written as (2.1)-(2.2) [19], where ρ is the density, λ and µ are the Lamé parameters, v_x and v_y are the particle velocity components in the x and y directions respectively, and σ_xx, σ_yy, σ_xy are the stress tensor components. Here we have added the source, i.e., the body force components f_x and f_y, to the v_x and v_y equations respectively, giving system (2.3). We rewrite system (2.3) in the matrix form (2.4)-(2.5), where the symbol † denotes the adjoint operator. Note that the determinant of the matrix C is positive, so from (2.4) and (2.5) we obtain (2.6). The compact form of the forward problem is (2.7), where m = (ρ, λ, µ) collects the model parameters and w collects the wavefield variables v and σ. In (2.7), the wave propagator A(m) is symmetric. Given the media parameters ρ, λ and µ, the forward problem is to solve system (2.6) numerically. There are many numerical methods for solving system (2.6), for example, the finite volume method [20] and the staggered-grid method [21]. In this paper, we use the staggered-grid method for its high computational efficiency and relatively easy code implementation. Since the computational domain is finite, absorbing boundary conditions [22] are required to absorb the boundary reflections. In this paper, we adopt the perfectly matched layer (PML) method [23] [24].
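As a stripped-down illustration of the staggered-grid idea (the paper's actual 2D elastic scheme is given in its Appendix A), the following 1D velocity-stress sketch uses a leapfrog update on interleaved grids. The grid sizes, material values and the Ricker source are illustrative assumptions, and absorbing boundaries (such as the PML of [23] [24]) are omitted for brevity, so the domain edges reflect.

```python
import numpy as np

# Minimal 1-D velocity-stress staggered-grid update (illustrative only).
nx, nt = 400, 900
dx, dt = 5.0, 5e-4                     # CFL = vp*dt/dx = 0.25 < 1, stable
rho = np.full(nx, 2000.0)              # density (kg/m^3)
E = np.full(nx, 2000.0 * 2500.0**2)    # modulus lambda + 2*mu for vp = 2500 m/s
v = np.zeros(nx)                       # particle velocity at integer grid points
s = np.zeros(nx - 1)                   # stress at half grid points (staggered)

def ricker(t, f0=25.0, t0=0.04):
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

src = nx // 2
for it in range(nt):
    # update stress from the spatial derivative of velocity
    s += dt * E[:-1] * (v[1:] - v[:-1]) / dx
    # update velocity from the spatial derivative of stress
    v[1:-1] += dt / rho[1:-1] * (s[1:] - s[:-1]) / dx
    v[src] += dt * ricker(it * dt)     # body-force source term
print("max |v| =", np.abs(v).max())
```

Interleaving velocity and stress in both space and time is what gives the scheme its second-order accuracy at essentially the cost of a first-order stencil, which is why the paper cites its computational efficiency and easy implementation.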
The forward discrete schemes for solving system (2.3) are given in Appendix A.

Full Waveform Inversion

FWI is a data-fitting procedure to find the model parameter m by minimizing the data misfit δd between the simulated data w_cal and the observed data w_obs. The objective function can be formulated as (2.8), where T is the total recording duration and N_s and N_r are the numbers of sources and receivers respectively. Once the media parameters ρ, λ, and µ are inverted, we can obtain the P-wave and S-wave velocities via v_P = sqrt((λ + 2µ)/ρ) and v_S = sqrt(µ/ρ). Neglecting the summation in (2.8) and introducing the restriction operator R at the receiver points, the gradient of the objective function with respect to the model parameter m is (2.11), where † is the adjoint operator. Furthermore, the Hessian matrix (2.12) can be obtained by taking the derivative of (2.11) with respect to the model parameter. The expression of the Hessian H is divided into two parts, i.e., H_1 and H_2. The first part H_1 is the Gauss-Newton approximation of the Hessian [15], which contains only the first-order derivatives of the wavefield; the second part H_2 involves the second-order derivatives.

Gradient Calculation

The gradient of the objective function can be obtained by the Lagrangian formulation [17] [18] [28] or by perturbation theory [29]. Following the perturbation theory [29], the gradient of the objective function with respect to the model parameters can be rewritten as (2.13)-(2.15). We solve the forward problem based on (2.3) instead of (2.1)-(2.2) by the staggered-grid method, so it is necessary to express the gradient (2.13)-(2.15) in terms of the particle velocity and the stress tensor. The results are (2.16)-(2.18), where the barred quantities v̄_x, v̄_y, σ̄_xx, σ̄_yy and σ̄_xy are the solutions of the corresponding adjoint problem. The derivation is given in Appendix B.

Trust Region Method

There are two strategies for solving a minimization problem: the line search method and the trust region method. In order to describe the trust region method more clearly and directly, we consider minimizing an abstract objective function f(x). The line search strategy is x_{k+1} = x_k + α_k p_k: it chooses a direction p_k first and then finds the step length α_k along this direction. The direction may be the steepest descent direction, the Newton direction and so on. After fixing the search direction, the step length is found by minimizing f along that direction, as in (2.20). Usually, inexact line search methods such as the Wolfe method and the interpolation method are used [26], since solving (2.20) exactly is too expensive. A popular inexact method is the Wolfe method [30], which we use in this paper. In the trust region method, information gathered from the objective function f(x) is used to construct a model function f̃_k whose behavior near the current point x_k is similar to that of the objective function. Because f̃_k may not be a good approximation of f when x is far from x_k, we restrict the search for a minimizer of f̃_k to some region around x_k. Namely, we find the step p by approximately solving the subproblem (2.21): minimize f̃_k(x_k + p) subject to ‖p‖ ≤ Δ_k, where Δ_k is the trust region radius. This is the trust region subproblem, and we solve it by the two-dimensional subspace method [26]. Note that the two-dimensional subspace method requires the Hessian to be symmetric and positive definite. In a sense, the line search method and the trust region method differ in that the line search method starts by fixing the direction p_k and then searches for the step length α_k, whereas in the trust region method the direction p_k and the step α_k are determined together, subject to the trust radius Δ_k.
Theoretically, the trust region method attains quadratic convergence more easily than the line search method. More theoretical details can be found in the references (e.g., [31] [32] [33]). One key step in the trust region method is the choice of the trust region radius Δ_k at each iteration. The choice is based on the agreement between the objective function f and the model function f̃_k at the previous iteration. Given a step p_k, we define the ratio (2.22) as ρ_k = (f(x_k) − f(x_k + p_k)) / (f̃_k(0) − f̃_k(p_k)), where the numerator is the actual reduction and the denominator is the predicted reduction. If p_k reaches the boundary of the trust region and the ratio ρ_k is satisfactory enough, the radius is increased by a constant factor: Δ_{k+1} = α_1 Δ_k (2.23). If the ratio ρ_k is too small, which means that the model function f̃_k is not a good approximation to the objective function f within the current radius, then we reduce the trust region radius: Δ_{k+1} = α_2 Δ_k (2.24). In this paper, we choose α_1 = 2 and α_2 = 1/4.

FWI Algorithms

In this section, we present the algorithm of the truncated Newton method with the line search strategy and the algorithm of the Gauss-Newton method with the trust region strategy.

Numerical Computations

In this section, numerical computations are presented to illustrate the advantages of the trust region method. We also present the inversion results by the L-BFGS method for comparison; we choose the L-BFGS method as it is the most popular quasi-Newton method. We consider FWI for the Marmousi model, which is commonly used to test the ability of an imaging or inversion algorithm. The exact Marmousi model is shown in Figure 1. For elastic wave FWI, the density is more sensitive than the velocities. Some hierarchical inversion strategies developed by [13] and [14] could be applied; in our FWI computations we don't apply these strategies, and instead invert the three media parameters simultaneously. However, a multiscale technique, implementing the inversion from low frequency to high frequency, is still used. Moreover, the data of the next higher frequency band include the data of the previous lower frequency band, which guarantees the robustness of the inversion. In our computations, four groups of frequencies are used to complete the inversion step by step, i.e., 2 Hz, 5 Hz, 10 Hz and 20 Hz. For every stage, the inversion terminates once a prescribed stopping criterion on the misfit is met. In Table 1, we present the iteration number at every stage (i.e., 2 Hz, 5 Hz, 10 Hz and 20 Hz) for the L-BFGS method, the Gauss-Newton (GN) method and the truncated Newton (TN) method with the line search (LS) strategy or the two-dimensional subspace (Sub) strategy. From Table 1, we can see that the Gauss-Newton method has a clear advantage over the other methods in terms of total iteration number. In Table 2, we compare the computational efficiency of the different methods: the total computational time and the time per iteration are listed, and the total iteration number is also listed for convenience. As we can see, the Gauss-Newton method with the two-dimensional subspace strategy costs the least computational time per iteration. In Figure 7, we present a profile comparison of the inversion results, and in Table 3 we present the l2 errors and relative l2 errors for the different methods, where the relative error is defined as ε = ‖m_inv − m_true‖_2 / ‖m_true‖_2.

Conclusions

In FWI, the line search strategy is usually applied. In this paper, we have investigated the application of the trust region strategy to elastic wave multi-parameter inversion. The theoretical methods and algorithms are described in detail.
The trust region subproblem is solved by the two-dimensional subspace method. Numerical computations for the Marmousi model show that the trust region strategy is more efficient than the line search strategy, and that both the Gauss-Newton and truncated Newton methods are more accurate than the L-BFGS method.

Appendix

From the relation between strain and stress [19], we have (B.1).
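To make the radius-update logic of (2.22)-(2.24) concrete, here is a small self-contained sketch of a trust region iteration built around a two-dimensional subspace step over span{g, B^{-1}g}. The boundary search by angular sampling and the Rosenbrock smoke test are our own illustrative choices, not the paper's implementation, which applies the method to the FWI misfit with the Gauss-Newton Hessian.

```python
import numpy as np

def subspace_step(g, B, delta, n_theta=720):
    """Approximately solve  min g.p + 0.5 p.B p  s.t. ||p|| <= delta
    over span{g, B^{-1} g} (the two-dimensional subspace method)."""
    u1 = g / np.linalg.norm(g)
    w = np.linalg.solve(B, g)
    w = w - (w @ u1) * u1                       # orthogonalize against u1
    basis = [u1] if np.linalg.norm(w) < 1e-12 else [u1, w / np.linalg.norm(w)]
    V = np.stack(basis, axis=1)                 # orthonormal columns
    gr, Br = V.T @ g, V.T @ B @ V               # reduced gradient and Hessian
    try:                                        # interior candidate
        p_in = np.linalg.solve(Br, -gr)
        cands = [p_in] if np.linalg.norm(p_in) <= delta else []
    except np.linalg.LinAlgError:
        cands = []
    if V.shape[1] == 2:                         # boundary candidates (angle grid)
        th = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        cands += [delta * np.array([np.cos(a), np.sin(a)]) for a in th]
    else:
        cands += [np.array([delta]), np.array([-delta])]
    model = lambda p: gr @ p + 0.5 * p @ Br @ p
    return V @ min(cands, key=model)

def trust_region(f, grad, hess, x, delta=1.0, tol=1e-8, itmax=200):
    a1, a2 = 2.0, 0.25                          # radius factors, as in the paper
    for _ in range(itmax):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        p = subspace_step(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)       # predicted reduction
        act = f(x) - f(x + p)                   # actual reduction
        rho = act / pred if pred > 0 else -1.0  # the ratio of (2.22)
        if rho < 0.25:
            delta *= a2                         # shrink: poor model fit
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta *= a1                         # expand: step hit the boundary
        if rho > 0:
            x = x + p                           # accept the step
    return x

# Hypothetical smoke test on the Rosenbrock function (not an FWI misfit).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * x[1] + 1200 * x[0]**2, -400 * x[0]],
                           [-400 * x[0], 200]])
print(trust_region(f, grad, hess, np.array([-1.2, 1.0])))  # converges near (1, 1)
```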
Probing the Effect of Selection Bias on Generalization: A Thought Experiment

Learned systems in the domain of visual recognition and cognition impress in part because, even though they are trained with datasets many orders of magnitude smaller than the full population of possible images, they exhibit sufficient generalization to be applicable to new and previously unseen data. Since training data sets typically represent a small sampling of a domain, the possibility of bias in their composition is very real. But what are the limits of generalization given such bias, and up to what point might it be sufficient for a real problem task? Although many have examined issues regarding generalization, this question may require examining the data itself. Here, we focus on the characteristics of the training data that may play a role. Other disciplines have grappled with these problems, most interestingly epidemiology, where experimental bias is a critical concern. The range and nature of data biases seen clinically are really quite relatable to learned vision systems. One obvious way to deal with bias is to ensure a large enough training set, but this might be infeasible for many domains. Another approach might be to perform a statistical analysis of the actual training set, to determine if all aspects of the domain are fairly captured. This too is difficult, in part because the full set of variables might not be known, or perhaps not even knowable. Here, we try a different approach in the tradition of the Thought Experiment, whose most famous instance may be Schrödinger's Cat. There are many types of bias, as will be seen, but we focus only on one, selection bias. The point of the thought experiment is not to demonstrate problems with all learned systems. Rather, this might be a simple theoretical tool to probe into bias during data collection to highlight deficiencies that might then deserve extra attention either in data collection or system development.

Introduction

We are motivated in this paper by two observations. First, there seems to be little effort in the machine learning field to routinely validate training data in the sense of ensuring that all important data variations are sufficiently captured statistically and without bias. This deficiency has famously manifested itself in a variety of learned systems that make spectacular errors widely publicized in international media. Second, there is the belief that a system's generalization properties can be ameliorated via manipulations of the given training data, and there is a huge variety of these, from simple to very sophisticated (Shorten & Khoshgoftaar, 2019). Many claim that these suffice to remedy any problems arising due to data biases, yet those same problems seem to persist because generalizations cannot deal even with tiny adversarial manipulations (e.g., Xu et al. 2020). Our goals for this paper are therefore, first, to begin a discussion of these points and their implications, especially for practical systems where amelioration on its own is not a sufficient goal because there are engineering specifications to be met, and second, to suggest a simple idea for determining which gaps in training data sampling might have an important impact on system performance. It is self-evident that how an artificial agent of any kind perceives its world is critical to how it reacts to events in its world.
Within perception, we focus on vision as the primary perceptual source for humans and a capability that requires roughly half of the cerebral cortex to accomplish, thus a major contributor to an agent's overall intelligence. Vision is also a major perceptual source for robots and other artificial agents. We were also concerned that the explosive development of training sets for learned systems is insufficiently rooted in sound statistical science, particularly with respect to bias, which, given the tiny size of datasets relative to the full population, is likely a much larger problem than is being acknowledged. In other words, the current empirical methodologies seen in research and applications might benefit from improvement of their data sampling components. It is certainly true that in vision, no current training set fully captures all dimensions of the representation of visual information. This may lead to Selection Bias, which results when the selection of data samples leads to a result that is different from what would have been achieved had the entire target population been considered. We run a thought experiment on a real domain of visual objects that we can fully characterize, and look at specific gaps in training data and their impact on qualitative performance as well as on the engineering specifications required for acceptable performance in the domain. Our thought experiment points to three conclusions: first, generalization behavior is determined by how well the particular dimensions of the domain are represented during training; second, the utility of generalization is dependent on the acceptable system error; and third, specific visual features of objects, such as pose orientations out of the imaging plane or colours, may not be recoverable if not represented sufficiently during training. Any currently observed generalization in modern deep learning networks may be more the result of coincidental alignments than a true property of the network or its training regime, and its utility needs to be confirmed with respect to a system's performance specification. Whereas empirical confirmation may be the best method for such verification, it is a time- and resource-expensive activity. Our Thought Experiment Probe approach, coupled with the resulting Bias Breakdown, is far less costly and can be very informative as a first step towards understanding the impact of biases. Further, it could be an important supplementary information component for training systems with critical performance requirements.

Current Thinking about Generalization in Learned Systems

Generalizability refers to the performance difference of a model when evaluated on previously seen data (training data) versus data it has never seen before (testing data). Models with poor generalizability have overfitted the training data. A deep learning image classifier is a mathematical function that maps images to classes. Many such classifiers have been applied productively in a variety of domains. A major reason for their success has been that they do not need to be trained with the full domain of the function in order to exhibit strong classification performance on previously unseen input. The reasons underlying this have been studied for some time yet are not well understood. Bartlett & Maass (2003) write that the VC-dimension of a neural net with binary output measures its 'expressiveness'.
The derivation of bounds for the VC-dimension for both binary and real-valued neural nets has been challenging. Bounds for the VC-dimension of a neural net provide estimates of the number of random examples that are needed to train a network so that it has good generalization properties (i.e., so that the error of the network on new examples from the same distribution is at most ε, with probability ≥ 1 − δ). For a single application these bounds tend to be too large, since they provide such a generalization guarantee for any probability distribution on the examples and for any training algorithm that minimizes disagreement on the training examples. This might seem to be what we seek, but it is not the case: as will become apparent, we are interested in cases where the training and test distributions differ in their specific detail. There are many who have written about network generalization, and some recent examples follow. Among the most cited are Zhang et al. (2016), Keskar et al. (2016), Neyshabur et al. (2017), Kawaguchi et al. (2017), Wu & Zhu (2017) and Novak et al. (2018), and most recently Fetaya et al. (2020). Zhang et al. presented a simple experimental framework for defining and understanding a notion of effective capacity of machine learning models and showed that the effective capacity of several successful architectures is large enough to memorize the training data. They also distinguish optimization from generalization, arguing that formal measures to quantify generalization are still missing. Neyshabur et al. (2017) discuss different candidate complexity measures that might explain generalization in neural networks. They reach no specific conclusions, however, showing how much is unresolved. Fetaya et al. (2020) reveal that it is impossible to guarantee detectability of adversarially perturbed inputs even for near-optimal generative classifiers. Experimentally, they were able to train robust models for MNIST, but robustness completely breaks down on CIFAR10. They suggest that likelihood-based conditional generative models are ineffective for robust classification. Zhao et al. (2019) examine generalization in deep reinforcement learning and focus on three issues. First, they look at generalization from a deterministic training environment to a noisy and uncertain testing environment. Second, assuming one correctly models environmental variability in a training simulation, there is the question of whether an agent learns to generalize to future conditions drawn from the same distribution, or overfits to its specific training experiences. Third, there is the effect on training due to the impossibility of predicting and accurately modeling the environmental conditions and variability an agent might encounter in the real world (i.e., a predictive model should be robust to, for example, camera type, initial agent state, etc.). They show that standard algorithms and architectures generalize poorly in the face of both noise and environmental shift. They propose a new set of generalization metrics as a way of assisting with these problems. It should be pointed out that there is an implicit assumption in that paper, common to most similar works, that the domain of interest can be effectively characterized and parameterized in the first place; this is not possible in general (see also Barbu et al. 2019 and Afifi & Brown 2019 for additional evidence). Others, such as Senhaji et al.
(2020), consider multi-task learning and multi-domain learning as a way of providing behaviour that goes beyond a single task or domain while also minimizing the number of needed parameters. However, like all the others, they make the implicit assumptions that a given domain can be 'known' with respect to its parameters and that training sets adequately cover that set of parameters; neither assumption is always justified. Bengio & Gingras (1996) consider the case where some of the input variables are given for each particular training case and the missing variables differ from case to case, and suggest that the simplest way to deal with this missing data problem is to replace the missing values by their unconditional mean. More recently, Van Buuren (2018) details many methods for imputing missing data and their pitfalls, as do Lin & Tsai (2020), who advocate for the development of novel hybrid approaches combining statistical and machine-learning-based techniques, the consideration of three evaluation metrics together, and missing data simulation for both training and testing datasets. Śmieja et al. (2018) propose that missing data can be handled by replacing a neuron's response in the first hidden layer by its expected value, thus proposing a neural/unit-level embodiment of Bengio & Gingras's proposal of using the unconditional mean. Chai et al. (2020) fill gaps in a sparsely sampled dataset by fitting curves to the data and interpolating. García-Laencina et al. (2010) review methods for pattern classification when there is missing data. They group previous research into 4 categories: 1) methods that delete incomplete cases and develop classifiers using only the complete data portion; 2) methods that estimate missing data and learn classifiers using the edited set, i.e., the complete data portion plus incomplete patterns with imputed values; 3) methods that use model-based procedures, where the data distribution is modeled procedurally, e.g., by the expectation-maximization (EM) algorithm; and 4) methods where missing values are incorporated into the classifier. These seem to imply that one knows in advance that there is a missing feature or missing data, and this seems an unreasonable assumption in general. Polikar et al. (2010) claim to address the missing feature problem but admit that they draw an equivalence between missing data and missing features. They assume that the complete set of variables in a feature vector is known but that data samples might have missing values for some.

The Issue of Bias in Data

Issues regarding bias in data are common in any discipline that relies on statistical information based on observational data (e.g., genetics, Balding (2006); epidemiology, Gordis (2014)). In epidemiology, bias has been defined as any systematic error in the design, conduct or analysis of a study that leads to an erroneous estimate of an exposure's effect on disease. Such bias reflects systematic errors in research methodology. The types of bias include Selection, Detection, Observation, Misclassification, and Recall. These authors highlight that bias is a complex and well-studied topic (see the classic Sackett (1979) paper or the more recent Grimes & Schulz (2002), among many others). These works mostly examine statistical issues in medicine; care is needed when relating them to machine learning, computer vision or AI. Nevertheless, much is clearly instructive, since that community has struggled with statistical bias for decades in the context of life-and-death decision-making.
In other words, there is a very real cost when error is due to bias; much of computer vision and AI research, especially in the published literature, may not fully consider such costs. Sackett's (1979) highly cited paper provides a catalog of biases. Bias may be introduced at any stage of a research enterprise. He gives the following high-level types (in his words):

i. In reading-up on the field (5)
ii. In specifying and selecting the study sample (22)
iii. In executing the experimental manoeuvre (or exposure) (5)
iv. In measuring exposures and outcomes (13)
v. In analyzing the data (5)
vi. In interpreting the analysis (5)
vii. In publishing the results

For 6 of these 7 categories, he gives sub-classes, whose number appears in parentheses after each item in the above list. In sum, he points to a total of 56 types of bias that might impact a research effort. The use of the term 'exposure' in the above is appropriate for medicine (as in exposure to a virus, or to a drug). In computer vision or AI, the 'experimental manoeuvre' corresponds to the attempt by the computer system to solve or perform a task. Throughout the history of AI research, including all of its sub-fields, has this level of analysis with respect to bias ever been considered? Not really. Should it be now? Yes, especially as applications are beginning to touch safety-sensitive domains. In the current presentation, we focus primarily on the second item in Sackett's list, bias in specifying and setting up the study sample. Even within this category he lists 22 sub-classes. It is not germane to our argument to list all of them here, and the interested reader is encouraged to seek out the original paper. It is very useful, however, to give a few for which it is more straightforward to cast their characteristics from medicine into the computer vision or AI domain. Here are 5 of the 22, in his words (in italics), each followed by a possible interpretation relevant for AI:

Popularity bias: The admission of patients to some practices, institutions or procedures (surgery, autopsy) is influenced by the interest stirred up by the presenting condition and its possible causes. In other words, if many researchers, companies or governments are interested in a particular type of image (e.g., challenges sponsored by a conference or by a prize, the financial lure of start-ups and funding competitions, etc.), the more likely samples of that type (or studies of this type as a whole) are to be prioritized and others excluded.

Wrong sample size bias: Samples which are too small can prove nothing; samples which are too large can prove anything. How does one determine sample population size for a given task and domain? Most in the field (perhaps all) follow the maxim 'the more data the better' (e.g., Anderson 2008) with little regard to its suitability for the desired outcome.

Missing clinical data bias: Missing clinical data may be missing because they are normal, negative, never measured, or measured but never recorded. Tsotsos et al. (2019) show how the range of possible camera parameters is not well-sampled in modern datasets and how their settings make a difference in performance. Similarly, datasets are not typically annotated with imaging geometry or lighting characteristics for each image, yet it is clear each affects an image. Finally, there are many cases of particular data being missing, and several research efforts to ameliorate this problem were given in Section 1.1 above.

These are just 5 of the 22 sub-categories.
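To make the flavor of such sampling biases concrete in a learning setting, the following toy sketch, built on an entirely synthetic data generator of our own design (the "hue" attribute and all distribution parameters are hypothetical), trains a classifier on a sample that omits one value of a domain attribute and then evaluates it on the omitted stratum.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Selection bias toy demo: one value of an attribute ("hue") is absent
# from training, and accuracy on that stratum collapses even though
# training accuracy looks fine.
rng = np.random.default_rng(0)

def make_stratum(n, hue_shift, label):
    # class-informative feature in dim 0, hue-dependent shift in both dims
    X = rng.normal(loc=[label * 2.0 + hue_shift, hue_shift], scale=1.0, size=(n, 2))
    return X, np.full(n, label)

# Two strata per class: hue A (shift 0) and hue B (shift 4).
Xa0, ya0 = make_stratum(500, 0.0, 0); Xa1, ya1 = make_stratum(500, 0.0, 1)
Xb0, yb0 = make_stratum(500, 4.0, 0); Xb1, yb1 = make_stratum(500, 4.0, 1)

# Biased selection: train on hue A only.
Xtr, ytr = np.vstack([Xa0, Xa1]), np.concatenate([ya0, ya1])
clf = LogisticRegression().fit(Xtr, ytr)

Xb, yb = np.vstack([Xb0, Xb1]), np.concatenate([yb0, yb1])
print("train (hue A):", clf.score(Xtr, ytr))          # high accuracy
print("unseen stratum (hue B):", clf.score(Xb, yb))   # near chance level
```

The decision boundary learned on hue A places nearly all hue-B samples on one side, so performance on the omitted stratum drops toward chance, a tiny numerical analogue of the "missing data" interpretation above.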
The point is to highlight that bias in how one decides on training sets is a well-known issue and has been studied for decades. The reader may disagree with our specific interpretations (and we invite better ones), but the existence of these potential sources of bias is well-accepted and undeniable. Acknowledging and incorporating this thinking into machine learning can only benefit the community and the applicability of the results to real-world, safety-critical problems. The remainder of this paper will consider Selection Bias in general and propose a theoretical framework for exploring how specific training data 'holes' might impact generalization with respect to a required performance specification.

Setting Up the Problem

None of the approaches in Section 1.1 seem to have made a definitive difference on the overall issue. None ask the question: If a network is trained with a biased dataset that misses particular samples corresponding to some defining domain attribute, can it generalize to the full domain from which that training dataset was extracted? It is important to note that, due to the impossibly large space of inputs, it is not feasible to empirically prove, using exhaustive testing, that generalization abilities are complete for any network. Perhaps a different perspective might be helpful. For our purposes, we need the following. First, we note that our focus is on visible light images taken with conventional camera systems. Although the conclusions might apply to other kinds of images (LiDAR, thermal, etc.), these are not considered. Other domains, such as NLP, are also not considered, but a similar analysis might be as applicable. Second, the kinds of generalization errors the above papers report, let alone the empirical error, do not seem to be in the ranges one might expect for high-performance engineering tasks. If a system has a maximum empirical error that suffices to pass an engineering specification, then its generalization error should also meet that same specification. We will perform our analysis with this as a key consideration. Third, we note that any image training or test set is a subset of the set of all possible discernible images. This might be trivially obvious, but there is an important point here. Consider images of size 32 x 32 pixels with 3 bits per pixel (one bit per color).
does the head tilt?), image background (e.g., is it on a plain background?), etc., reduces the domain significantly but do they do so sufficiently? Biases due to other factors are well-documented to cause problems (skin tone, lighting, hair, glasses, masks, etc.). The conclusion here is that the domain of images, even if application-specific, can still contain far more instances than practical to represent or use. More importantly, the set of attributes that fully define the domain may be unknown (or unknowable). Even though there are many effective approaches to creating a training set, bias seems not well-handled. It seems quite easy to not include one or more of a domain's attributes or to not have sufficient representation for a particular one within a training set. This is particularly true for 'in the wild' efforts. Missing data would mean that for some domain attribute, its value in a particular training sample is missing (the color of the traffic light is missing, there is no cloudy scene, etc.). No learning method can possibly know in advance how many attributes define a given domain, nor can it know if enough training data is provided to sufficiently enable a high-quality feature extraction process. For example, suppose it is possible to collect a large number of images of several classes of objects in natural scenes but with the light source always in the same position (i.e., identical imaging geometry, such as if all images are taken on clear, sunny days at around 9am). Would a network trained on this dataset generalize to images of those same objects in the same scenes but with differing imaging geometries (e.g., different light source position with respect to the scene and camera)? What about different camera parameter settings? There already is at least one set of studies to show that camera settings make a very large difference to algorithm performance, both classical algorithms and deep learning algorithms (Andreopoulos & Tsotsos, 2011;Wu & Tsotsos 2017;Tsotsos et al. 2019). The initial training set could be considered biased with respect to the dimension of scene lighting since it did not provide the network with samples of many different variations. Similar situations can be considered for a large variety of other image variations (object size, object, color, object 3D pose, contrast with background, etc.). It is very difficult to think that training sets can be easily defined that provide sufficient training samples along each possible variation. Any practical training process for a non-trivial and not artificially constrained domain is likely biased. As noted earlier, the types of bias include Selection, Detection, Observation, Misclassification, and Recall bias. For our purposes, the type of bias we focus on is Selection bias (other forms of bias remain for future research). Selection bias can result when the selection of subjects leads to a result that is different from what would have been achieved had the entire target population been considered. In vision, a training set is a very tiny subset of the full population of all possible images. The problem is worse when one considers the feature level because of the combinatorially large number of feature combinations (e.g., Tsotsos 1989). Ahmad & Tresp (1993) also point out this combinatorial nature and conclude that for each of these combinations, enough patterns must be included to accurately estimate the posterior density. 
The satisfaction of this may not be possible in practice unless the domain is small and very well-defined. This is where the generalization properties observed in learned networks are hoped to help. However, the limitations of this hope are not well understood. The Approach Our main focus will be on the effect of domain attributes that are unrepresented in training sets due to the potential intractability of complete representations as mentioned above (or perhaps for other reasons as well). This includes any training dataset where for some defining attribute of the domain of interest, one or more of its possible values has no instance (e.g., if color is an attribute then 'red' is a possible value but there is no instance of it in the training set). To address this problem empirically seems an intractable problem. We thus approach this in the tradition of the Thought Experiment, whose most famous instance is perhaps Schrödinger's Cat. Our motivating question was given in the abstract: What are the limits of generalization given such bias, and up to what point might it be sufficient for a real problem task? To be more specific, If a network is trained with a dataset that misses particular values of some defining domain attribute, can it generalize to the full domain from which that training dataset was extracted while maintaining its performance accuracy? In vision, no current training set fully captures all properties of the representation of visual information 2 . We run this thought experiment on a real domain of visual objects, LEGO ® bricks 3 , that we can fully characterize and look at specific gaps in training data and their impact on qualitative performance as well as their impact on the engineering specifications required for acceptable performance in the domain. We reach the following conclusions: first, that generalization behavior is dependent on how sufficiently the particular dimensions of the domain are represented during training; second, that the utility of any generalization is completely dependent on the acceptable system error; and third, that specific visual features of objects, such as pose orientations out of the imaging plane or colours, may not be recoverable if not represented sufficiently during training. We proceed as follows. We will define a hypothetical task in a real problem domain, the domain of LEGO ® bricks. We will then assume the task can be sufficiently solved by a correctly trained neural network. The training dataset will then be manipulated, removing particular attribute values and the assumption will be made that the network now trained with the altered dataset will generalize so that it performs as well as the original. The resulting network will be examined in order to determine how the generalization might come about. Specifically, the approach is Reductio ad absurdum via Backcasting, and this will be further explained below. Our analysis will tie the feasibility of generalization for particular domain attributes to the target performance specifications, because for any applied domain, this seems a critical consideration. First, we specify our hypothetical problem domain and task. The Domain of LEGO ® Bricks Suppose the well-known toy manufacturer, LEGO ® , wishes to take advantage of computer vision advances to open a new premium service that allows customers to order any combination of bricks in their inventory. 
They wish to visually recognize LEGO® bricks as they pass in front of a camera on a conveyor belt in order to sort them into bins depending on a specified definition (e.g., by color, by shape, or by number of studs). The full variety of pieces is considered, including many special purpose ones, as the figures below will show. The bin definitions may vary depending on customer requests. A customer might order 10,000 LEGO® bricks all of a particular color but randomly selected otherwise. Another might wish all bricks that are of the 'plate' category with fewer than 12 studs, but of all colors. In order to visually recognize the bricks, we will use a deep learning approach; let us assume the learning system we employ works perfectly and that training error equals zero. We also assume that the standard procedure to reach the best generalization is followed, as described in many of the papers cited earlier. It must be stressed that we do not seek a solution to this task (which would likely be straightforward). Rather, we use this scenario as the backdrop of a theoretical probe into generalization.

[2] Many datasets have quite broad coverage, but we are unaware of any that cover the full range of possible direct and ambient illuminations or viewpoints, for example, for any particular domain.
[3] LEGO® played no role in this research, and nothing in this manuscript can be considered as representing any aspect of the company or its products. The choice of these building blocks, as opposed to others, is due to their familiarity around the world as well as to the extensive public documentation provided by many authors on the WWW.

The LEGO® Bricks World

What is the dimensionality of the LEGO® bricks domain? LEGO® bricks can be fully characterized by the following: color, size, type, sub-type, dimension (m x n) in studs, and part attributes. Fortunately, others have put much effort into documenting this domain. An online source of container labels that gives a very broad coverage of bricks that have actually been produced over the lifetime of LEGO® is given by Boen (2018). There are also label pages for the Technic® series of bricks (Technic® Bricks, Technic® Beams, Technic® Axles, Technic® Gears, Technic® Pins & Connectors), but these are not included here, and the count reflects only the classic LEGO® collection. It should be noted, as will be seen in the figures below, that a large assortment of unusual formations is included in these collections; all blocks shown in the following figures are from one of the categories in Boen's catalog. N in the name of a category (e.g., Plates 1xN) refers to the number of studs per brick (e.g., a 2x6 plate is a flat piece with studs placed 2 across and 6 deep). Some of Boen's labels are for bins that store pairs of similar pieces, and these are not included in the count. The counts given are of the unique brick types in this label dataset; the total number of sub-types is 520. Many pieces do not neatly fit into simple categories, even though they are so included, and examples will appear below. There is a separate label page of all possible brick colors in the Boen (2018) catalog, Color Bars, and these total 141. Since any brick may have any color, the number of possible unique LEGO® pieces is 520 x 141 = 73,320. This is the space of all possibilities, but from a practical standpoint, not all are in current production.
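This tally is simple enough to set down explicitly. A minimal sketch (illustrative only; the constants are the catalog figures just cited):

```python
# Tallying the LEGO brick domain as characterized above.
sub_types = 520    # unique brick sub-types in Boen's (2018) label catalog
colors = 141       # entries on the catalog's Color Bars page

# Any brick may, in principle, appear in any color.
unique_pieces = sub_types * colors
print(unique_pieces)   # 73,320 possible unique pieces
```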
Diaz (2008) provides the following estimates:
• LEGO® went from 12,000 different pieces to 6,800 in the last few years, a number that includes the color variations.
• Staple colors are red, yellow, blue, green, black and white. We thus assume equal numbers of each of 6 colors; this means 1133 unique brick types.
• Approximately 19 billion LEGO® elements are produced per year; 2.16 million are molded every hour, 36,000 every minute.
• 18 elements in every million produced fail to meet the company's high standards.

For the purposes of our argument, this is sufficient to provide a good characterization of the size of the domain and of its dimensions (even though they are only coarsely specified or there are variations). For example, in the Bricks 1xN type, there are 9 simple types, i.e., 1x1, 1x2, 1x3, 1x4, 1x6, 1x8, 1x10, 1x12, and 1x16, but also 40 more complex shapes with varying counts of studs. The number of studs could be used as a dimension on its own, but it would only apply to a subset of all the types. Our argument will not depend on the exact ranges of each dimension.

Detection and Binning Scenario

The following elements define the scenario for our problem of sorting bricks into bins depending on a specified bin definition that satisfies customer requests:
• There is a conveyor belt moving at a speed compatible with the speed of production of the bricks, i.e., using the data described earlier, so that 600 elements can be processed per second. It is acceptable that many conveyor belts might be used, operating in parallel.
• Lighting is fixed; camera position and optical parameters are fixed and known; the camera's optical axis is perpendicular to the conveyor belt and centred at the belt's centerline (the camera is directly overhead, pointing down at the center of the conveyor belt); shadows are minimized or non-existent; the appearance of the conveyor belt permits easy figure-ground segregation.
• An appropriate system exists for understanding a customer request (e.g., "I need 500 bricks, only yellow and with fewer than 6 studs") and deploying the appropriate systems for its realization.
• The pose of each brick on the conveyor belt may be upright, upside down, or on one of its other stable sides. Each brick has its own set of stable sides on which it might lie, from at least 2 stable positions to perhaps a maximum of 6 (the sides of a cube). We conservatively assume 3 as the average, but this is likely low.
• When on a stable side, bricks may be at any orientation relative to the plane of the conveyor belt; we assume the recognition system understands orientation quantized to 5°, i.e., 72 orientations. (360 orientations seems unnecessarily fine-grained whereas 4 seems too coarse; the assumption is made to enable the 'back-of-the-envelope' kind of counting argument made here and could easily be some other sensible value. It also seems sensible given the restrictions on camera viewpoint described earlier.)
• We assume that bricks do not overlap and thus do not occlude each other for the camera.
• The number of possible images using the complete library of bricks is 3 (poses) x 520 (brick types) x 141 (colors) x 72 (orientations) = 15,837,120 (a single instance of each part at each orientation and pose). In our argument, we will use the actual production numbers described earlier, namely 3 (poses) x 6800 (bricks of the standard 6 colors) x 72 (orientations) = 1,468,800; see the sketch following this list.
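The enumeration in the last bullet, together with the production-rate figures from Diaz (2008), can be checked in a few lines (a sketch; all constants are the approximations adopted above):

```python
poses, orientations = 3, 72   # assumed average stable poses; 5-degree quantization

# Full catalog: every sub-type in every color.
print(poses * 520 * 141 * orientations)   # 15,837,120 possible images

# Production set: the 6,800 in-production bricks already include color variation.
print(poses * 6800 * orientations)        # 1,468,800 possible images

# Throughput the sorting line must sustain: 2.16 million bricks molded per hour.
print(2_160_000 / 3600)                   # 600.0 bricks per second
```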
Examples of user requests could be 'all slopes', 'all 1x4 plates', 'all bricks of color C', 'all bricks with more than 4 studs', or other similar requests. There may be more than one task executed concurrently, but this is not relevant to our argument. We further assume that the error for this binning task needs to be very small. LEGO® brick manufacturing error is 1.8 × 10⁻⁵ (from Diaz 2008), and it is assumed that any additional error associated with satisfying these user requests should not appreciably add to it. The number of possible images is easily manageable by contemporary infrastructure and also seems to be within reach of the learning/memorization ability of current neural networks. In other words, a product manager at LEGO® might simply demand that such a dataset be generated and that a big enough neural network be used. However, our goal is to use this network as an experimental vehicle to explore generalization, and we will do so by investigating how generalization error is affected by deficiencies (selection biases) in training data.

The Thought Experiment: Setup

Thought experiments generally:
• challenge (or even refute) a prevailing theory, often involving proof via contradiction (Reductio ad Absurdum);
• confirm a prevailing theory;
• establish a new theory;
• simultaneously refute a prevailing theory and establish a new theory through a process of mutual exclusion.

The tactic we employ here is Reductio ad Absurdum via the methodology of Backcasting (Robinson 1990), in order to challenge the prevailing belief that deep learning networks generalize in a useful manner and to probe generalization behavior. Backcasting involves establishing the description of a very definite and very specific future situation. It then involves an imaginary moving backwards in time, step-by-step, in as many stages as are considered necessary, from the future to the present, to reveal the mechanism through which that particular specified future could be attained from the present. See Figure 1 for a graphical depiction. Backcasting is not concerned with predicting the future; the major distinguishing characteristic of backcasting analyses is the concern not with likely futures, but with how desirable futures can be attained.

Assume as the starting point for our backcasting tactic that the recognition task for the LEGO® brick binning problem can be learned, and that when trained and validated on the full set of samples as defined by the above description, network A is produced with empirical error less than 1.8 × 10⁻⁵ (the overall manufacturing error set by LEGO®). Of course, this might mean that the full set is memorized; and since it is the full set, no generalization is even needed. But this is not relevant here: it does not matter how the network achieves this accuracy, only that it does. We then probe the speculation that if training of the same architecture is biased, its generalization properties will overcome the bias with the same acceptable performance level. We will examine how this speculation might be verified by examining the logical functions that would be needed in the architecture and its performance under that speculation. If we fail the performance specification, this is the proof by contradiction (Reductio ad Absurdum) that our speculation was flawed.
An important assumption is that regardless of the network configuration, the network includes all the classes that would be required to satisfy any user query (classes for brick shape, color, or size defined by number of studs). Supervised learning would include all the required known classes; in other words, the range of possible user queries defines the output classes of the network. We also assume that any user query is parsed correctly. Then, to create the specific consequent required by backcasting, the exact architecture of this network (specifically, all output classes required for any user query are kept intact) is re-initialized and re-trained with a biased training set. This hypothetical may seem worrisome, but it is part of the imagination of the experiment. If, for example, a complete re-training of a new network were undertaken with the biased dataset, it might be that one or more output classes corresponding to specifics of user queries would not be present. This would of course be unacceptable for the application, but it would also not allow the thought experiment to proceed. Such a scenario is reminiscent of Xu et al. (2019), who conclude that if labels for a particular category are absent from a dataset, the network will treat the unlabeled object in the image as background or 'other'. In our case, the background is the conveyor belt, and we assume all needed labels exist.

If a network effectively generalizes, this means that its test error is less than some bound determined by the problem domain. For the LEGO® problem domain, the empirical error bound can be assumed to be less than 0.000018, based on the manufacturing error described earlier. The goal is to minimize generalization error, the difference between expected and empirical error, G. As mentioned earlier, the binning process should not exceed this level of error, and therefore we will assume G < 0.000018 is a reasonable target for overall error. Assume the newly trained system A (the system responsible only for visual recognition of the bricks) satisfies this target: G_A < 0.000018. The subscript on G will denote the particular network being considered. The main concern is generalization of the visual recognition performance under different training biases. No assumptions are made about any particular capabilities of the network. For example, CNNs are by design somewhat location insensitive; however, all LEGO® blocks are imaged centrally, so this is not relevant. No particular knowledge about the task is present in the network before the learning phase.

Why is this a reasonable test? It is a common complaint that datasets for autonomous driving, for example, are often predominantly created in good weather, with no snow or fog or heavy rain or hail, with little traffic, or have other idiosyncrasies that make them represent incomplete training populations. Although the full domain for driving necessarily contains all weather elements, all traffic elements, etc., it is impossible to fully represent these in any finite dataset. Thus, a problem is evident: how can the system architect ensure appropriate generalization behaviour when the training set may not include any samples from a significant portion of the domain population? Is this even feasible? Our thought experiment is clearly as relevant as these real settings. We will explore the issues as the thought experiment unfolds.
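Before turning to the cases, note how the error accounting will work: each overall estimate below is simply the ideal network's bound plus whatever error a given training bias induces. A hypothetical bookkeeping helper (the names overall_error and meets_spec are ours, not from any library) makes this explicit:

```python
TARGET = 18 / 1_000_000   # LEGO's manufacturing error rate, 0.000018 (Diaz 2008)

def overall_error(bias_error, base_error=TARGET):
    """Estimated overall error for a biased network: the ideal network's
    bound G_A plus the error induced by the training-set selection bias."""
    return base_error + bias_error

def meets_spec(g, target=TARGET):
    """Does an error estimate stay within the production standard?"""
    return g < target

print(overall_error(0.0))                 # 1.8e-05: the unbiased network A sits at the budget
print(meets_spec(overall_error(1 / 6)))   # False: a one-sixth binning error swamps it
```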
The Thought Experiment: Cases Considered

Assume training in our LEGO® domain is done while leaving out one value of some attribute dimension from the training set, creating a new network A-i, where the i signifies the i-th dimension, one of whose possible values is not represented at all in the training set. This would not be unlike training an autonomous car using a dataset where only sunny, daylight scenes are included. That 'left out' dimension i, with respect to the visual appearance of the bricks, is one of: color (e.g., leave out one color), size (e.g., leave out all 2x2 bricks), shape (e.g., leave out all slopes), orientation (e.g., leave out images where the long axis of a brick is parallel to the conveyor belt), or pose (e.g., leave out all upside-down poses). We could also consider more than one such 'left out' value or dimension. For each case, we will consider the resulting generalization error. Going back to Figure 1, the flowchart of the Backcasting process, we use the 'current status quo', namely the state-of-the-art in computer vision, to 'create specific consequent', namely the network A-i, which will be assumed to satisfy all production specifications LEGO® would require. We will consider a number of different sources of possible training biases and their impact on error for this network. For each, the overall error will be estimated and given as the sum of the error of the full network A (G_A < 0.000018) and any new errors introduced as a result of the changes in the training set. As in Figure 1, one or more steps will be taken towards the 'logical antecedent', in other words, towards what must be true in order for network A-i to perform as assumed. Whether or not the logical antecedent is true or even realistic will determine our conclusions regarding generalization.

Training Set Missing One Brick Color

Suppose one color of the staple LEGO® brick set possibilities (red, blue, green, white, black, yellow) is left out of the training dataset and that the resulting system A-c satisfies the empirical error stipulated above. This means 5 colors are sufficiently represented and thus are learned correctly. This places the state of our argument at the 'create specific consequent' stage of the Backcasting flowchart in Figure 1. We now proceed to the boxes in the figure labelled 1 through to the 'logical antecedent'. (These steps apply to each of the cases described below and are not repeated in those descriptions.) Although the color label set referred to above includes 141 colors, these 6 are the standard ones and, in any case, sufficient to demonstrate the point here. If those 5 colors are learned and thus 'in the weights' of the network, then those weights might be invoked when the 'left out' color is seen.

Color is complicated (see Koenderink 2010). The most common representation for images, such as in those datasets in common use, is that each pixel is represented by R (red), G (green) and B (blue) values, each coded using 8 bits. In other words, for each of the colors, 256 gradations are possible, yielding a potential 16,777,216 colors. Luminance or intensity is not independently coded but is rather derived from these as an average of the three values. The values for each of R, G and B come from image sensor outputs, and the most common practice is for the sensor to employ a Bayer filter (a pattern of color filters with a specific spatial distribution), and then to be further processed, including de-mosaicing the image.
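As a concrete illustration of that sampling step, here is a toy RGGB Bayer mosaic written with numpy (a sketch only; a real camera pipeline would follow this with de-mosaicing, white balance, and further processing):

```python
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Simulate an RGGB Bayer filter over an (H, W, 3) image: each pixel
    retains only the channel its color filter passes, giving two green
    samples for every red or blue sample in each 2x2 block."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R at even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G at even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G at odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B at odd rows, odd columns
    return mosaic

# A uniform 'yellow' patch: strong R and G, almost no B.
yellow = np.tile(np.array([0.9, 0.85, 0.05]), (4, 4, 1))
print(bayer_mosaic(yellow))
```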
These color filters are designed to specific characteristics: B is a low-pass filter, G is a band-pass filter and R is a high-pass filter, together attempting to cover the range 400-700 nm. Spatially, the Bayer filter is arranged to have 2 green filters for each red or blue; that is, each image pixel is the result of these 4 samples, 2 green, one red and one blue. The filter pass ranges overlap only by small amounts. The goal is to match the wavelength absorption characteristics of human retinal cones, which very roughly are: S (blue) 350-475 nm, peaking at 419 nm; M (green) 430-625 nm, peaking at 531 nm; L (red) 460-650 nm, peaking at 559 nm. Rods, for completeness, respond from about 400-590 nm, peaking at 496 nm. An interesting characteristic is that beyond 550 nm, only the M and L systems respond. Koenderink points out that the distributions in the human eye are not ideal in how they cover the spectrum; an ideal system would cover the full spectrum more equally. This is what the Bayer filter attempts to accomplish. Of course, there are many color models and encoding schemes, but this is the most common.

Suppose the 'left out' brick color is yellow. As Koenderink points out, the spectra of common objects span most of the visible light spectrum (see his Fig. 5.18, which shows 6 different spectra of real 'yellow' objects; for each, most of the visible spectrum has some representation except blue). So, if one considers the RGB representation across all the bricks, most of the other colored bricks will have some yellow spectral content, but this will never be independently represented. The portion of the spectrum we commonly see as yellow is quite narrow, centred around 580 nm. In both human vision and using the Bayer filters, these wavelengths are not sampled by the blue component at all. In human vision, the absorption spectra of the L and M cones have good sensitivity to the yellow wavelengths (their peaks are at 559 and 531 nm respectively). However, using Bayer filters, most applications try to minimize overlap, so at the yellow wavelengths sensitivity is typically low, less than 50% of the peak. The question we ask is: if no yellow brick is part of the development of A-c, what is the resulting generalization error? Since color is 'in the weights' of this network, and all non-yellow objects likely exhibit some yellow in their reflectance spectra, we now ask if this is enough to enable classification of a yellow brick via mixing.

The color humans perceive is a complex function of the spectrum of the illuminant (and ambient reflected spectra), the angle of incidence of the illuminant on the surface being observed, the albedo of the surface, the angle of the viewer with respect to the surface, and the surrounding colors of the surface point being observed. Most of this is captured by the well-known Bidirectional Reflectance Distribution Function (BRDF) (see Koenderink 2010 for more detail). In our brick binning task, much of this can be ignored: the illuminant, its angle to the conveyor belt, the camera and its angle to the conveyor belt, and the surrounding surface are all constant. The variables remaining are the surface albedo and the angle between the camera's optical axis and the brick surfaces being imaged. It has long been accepted that colors can be formed by additive mixtures of other primary colors [7]. For our purposes, this means that even though some spectrum of color is 'in the weights' of the network, their combination cannot necessarily result in any other color if a primary color is missing.
Color theory for mixtures tells us:
• green is created by combining equal parts of blue and yellow;
• black can be made with equal parts red, yellow, and blue, or blue and orange, red and green, or yellow and purple [8];
• white can be created by mixing red, green and blue; alternatively, a yellow (say, 580 nm) and a blue (420 nm) will also give white.

However, red, blue and yellow are primary colors and cannot be composed as a mixture of other colors. If a primary color is unseen during training, there can be no set of weights that would represent it as a combination of other colors. This would mean that if a primary color is unseen, it could not be classified. In a brick classification task such as ours, color is an important dimension because it divides the entire population into 6 groups; within each group are then the different block types.

[7] Koenderink provides a history with some emphasis on Grassmann's laws from 1853. Bouma in 1947 writes an interpretation of these as: "If we select three suitable spectral distributions (three kinds of light) we can reproduce each color completely by additive mixing of these three basic colors (also called primary colors). The desired result can only be attained by one particular proportion of the quantities of the primary colors".
[8] The author of https://brickarchitect.com/color/ points out that LEGO® Black is not a true black, but rather a very dark gray, and LEGO® White is actually a light orange-ish gray.

The above are all additive mixing rules. Subtractive mixing should be considered as well because networks employ both positive and negative weights, and it might be that this is an alternate avenue for dealing with color. The additive rules arise from the RGB color model, while the subtractive rules come from the CMYK color model, not often seen in neural network formulations. The three primary colors typically used in subtractive color mixing systems are cyan, magenta and yellow. Cyan is composed of equal amounts of green and light blue. Magenta is composed of equal amounts of red and blue. Yellow is primary and cannot be composed of other colors in either color model. In subtractive mixing, the absence of color is white and the presence of all three primary colors makes a neutral dark gray or black. Each of these colors may be constructed as follows:
• red is created by mixing only magenta and yellow;
• green is created by mixing only cyan and yellow;
• blue is created by mixing only cyan and magenta;
• black can be approximated by mixing cyan, magenta, and yellow, but pure black is nearly impossible to achieve.

As Koenderink (2010) says, when describing his Fig. 5.18, the best yellow paints scatter all the wavelengths except the shorter ones. In fact, the spectra he shows for 'lemon skin', 'buttercup flower', 'yellow marigold' or 'yellow delicious apple' have significant strength throughout the red and green regions. In practice, a strong yellow, such as in a brick, could appear as the triple (r, g, b) in an RGB representation, where 0 ≤ r, g, b ≤ 1 and r, g >> b, with b close to 0. It would be reasonable to think that since no yellow brick is seen in training, there would be no corresponding ability to classify it. With some generous assumptions about how the colors are represented in the weights and how they combine through the network, there may be a route to combinations that lead to most colors.
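The additive rules just listed can be encoded directly, and the closure of derivable colors computed from those seen in training (a sketch of the argument as stated in the text, not of colorimetry in general):

```python
# Additive mixing rules from the text: each derived color and its ingredients.
rules = {
    'green': {'blue', 'yellow'},
    'black': {'red', 'yellow', 'blue'},
    'white': {'red', 'green', 'blue'},
}

def derivable(seen):
    """Repeatedly apply the mixing rules until no new color is derivable."""
    closed = set(seen)
    changed = True
    while changed:
        changed = False
        for color, ingredients in rules.items():
            if color not in closed and ingredients <= closed:
                closed.add(color)
                changed = True
    return closed

# Training set with yellow left out: no rule ever yields a primary color.
print('yellow' in derivable({'red', 'blue', 'green', 'white', 'black'}))   # False
# By contrast, a missing non-primary can be re-derived from seen primaries.
print('green' in derivable({'red', 'blue', 'yellow', 'white', 'black'}))   # True
```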
Such a combinatorial route would, after all, reflect the distributed representations underlying such networks (Hinton 1984): each entity is represented by a pattern of activity distributed over many computing elements, and each computing element is involved in representing many different entities. But color theory tells us that no combination can yield yellow; its absence cannot be overcome by a learning strategy that never sees samples of it, nor by distributed representation strategies. The only conclusion possible is that network A-c, where c ∈ (red, blue, yellow), will not properly generalize, but that c ∈ (green, white, black) might. This would of course be unacceptable to LEGO®. On the other hand, a dataset that is biased towards this latter color subset might actually exhibit performance metrics for test error that could appear promising; this would, however, be a false indication. An analysis of failure instances would reveal this problem. There are many other color spaces: Rasouli & Tsotsos (2017) review 20 different spaces and show how each leads to different characteristics for the detectability of objects. Above, we show only two of these. It might be that the correct choice of color space for particular training datasets leads to different generalization properties for learned systems. This requires further research, but the methods described here might be helpful.

Our hypothetical network A-c, where c ∈ (red, blue, yellow), would thus exhibit the following error. Leaving out one of these colors, if we assume all LEGO® bricks are made in all colors equally, means that 1/6th of all bricks cannot have their color correctly classified and are erroneously binned. Since the total number of bricks possible is 1,468,800, as enumerated in Section 2.2, a 1/6th error (244800/1468800 = 0.166667) overwhelms the manufacturing error of 0.000018. The overall error, as defined in Section 4.0, can be estimated as G_A-c < 0.166685. For A-c where c ∈ (green, black, white), using generous training assumptions, we can assume no additional error, so G_A-c < 0.000018.

Training Set Missing One Brick Size

Let us name the learned system where one of the sizes is left out of training A-z, z ∈ (s1, s2, ..., sk), the set of all block sizes, and assume that it satisfies the empirical error stipulated above. There are many sizes in the label set we are using, and not all sizes apply to all brick types. The majority of the specialty bricks are of a unique size. Common pieces have several sizes. For example, in the Bricks 1xN type, there are 9 simple sizes, i.e., 1x1, 1x2, 1x3, 1x4, 1x6, 1x8, 1x10, 1x12, and 1x16, but also 40 more complex shapes with varying counts of studs, thus of differing physical size when compared to other bricks but without variation of size with respect to that particular type of brick. In other words, there seem to be at least two different dimensions along which size might be represented: stud count and brick volume. There may be additional ways as well; let us consider stud count only (in Figure 2, the bricks in panels a, b, c, and d have 4, 1, 2, and 3 studs respectively). The minimum stud count is 1 and the maximum is 54 in the label set referenced above. The accurate size distribution is tedious to enumerate; it will be just as informative to assume that there are equal numbers of each stud count, that is, approximately 73,320/54 = 1358 using the full brick set, or 1133/54 = 21 using the production values for unique brick types cited above.
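For reference, the arithmetic behind the color-case estimate and these per-stud-count figures is easy to make explicit (a sketch; the constants are the approximations adopted in the text):

```python
base = 18 / 1_000_000                 # manufacturing error bound, 0.000018
total_images = 3 * 6800 * 72          # 1,468,800 possible images (Section 2.2)

# Color case: a missing primary color mis-bins 1/6 of all bricks.
color_bias = (total_images / 6) / total_images
print(round(color_bias, 6))           # 0.166667
print(round(base + color_bias, 6))    # 0.166685, i.e., G_A-c < 0.166685

# Stud counts, assuming each of the 54 possible counts is equally represented.
print(round(73_320 / 54))             # ~1358 bricks per stud count, full catalog
print(round(1133 / 54))               # ~21 per stud count, production catalog
```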
Suppose one stud count is left out of the training set but the resulting system A-z satisfies the empirical error stipulated above. How could the system generalize to that missing stud count? One might imagine that if a simple linear piece with 4 studs is left out of the training data then, with generous assumptions, the learned portions of the network for similar shapes might jointly fire and fill in the gap. This would mean that there is some combination of network elements that form a 4-stud straight piece, such as that shown in Figure 2a. There are many similar questions given the variety of ways LEGO® has found to make bricks with 4 studs. It seems that composition from learned pieces might provide a partial answer, but not a complete solution. Certainly, any error measure would be increased even if we assume a partial solution. Our hypothetical network A-z would exhibit the following worst-case error: if all the bricks of a single stud count are binned incorrectly, that is, 21 out of 1130 bricks, the error is 21/1130 = 0.018584, which, added to the overall production error, gives a cumulative error of G_A-z < 0.018602.

Training Set Missing One Brick Orientation

Suppose one orientation in the imaging plane (i.e., parallel with the conveyor belt; recall the imaging geometry described in Section 2.2) is left out of the training set and the resulting system A-o satisfies the empirical error stipulated above. Simple data augmentation methods, such as spatial shifts or in-plane rotations, are likely to permit generalization to one, or perhaps more, orientations being omitted from a training set (more on data augmentation methods below). The lack of good representation of all orientations is probably not an insurmountable problem; this would, however, also depend on whether or not there is a need for precision grasping if a robotic manipulator is used, and this is not further considered here. Data augmentation is a relevant method for reducing orientation-in-the-imaging-plane sampling biases in training data given the restricted imaging geometry. The hypothetical network A-o exhibits an overall error that is not increased by this bias, and thus we can assume that G_A-o < 0.000018.

Training Set Missing One Brick 3D Pose

Suppose one 3D pose is left out and the resulting system A-p satisfies the empirical error stipulated above. The LEGO® brick domain is well-suited to an aspect graph representation, and this is yet another advantage of choosing these bricks as our domain of interest; we could not easily enumerate all the characteristics of most other domains, certainly not of natural images. First, a brief overview of aspect graphs (Cutler 2003). An Aspect is the topological appearance of an object when seen from a specific viewpoint. Imagine a sphere where each point on the sphere represents the viewing direction formed by that point and the sphere's centre, as shown in Figure 3. A Viewpoint Space Partition (VSP) is a partition of viewpoint space into maximal regions of constant aspect. An Event is a change in topological appearance as viewpoint changes. A constant aspect, then, is a contiguous region on the sphere that is bounded by changes in topological appearance, i.e., events. The full space partition would show a different region for each face. Consider the simple example of a cube. The separate regions of the space partition would be composed of regions where only one side is visible, only 2 sides are visible, and only 3 sides are visible.
There is no partition region where 4 sides can be visible. There are 26 of these regions. An Aspect Graph is a graph with a node for every aspect and edges connecting adjacent aspects; the dual of a viewpoint space partition is the aspect graph. Aspect graphs can be made for convex polyhedra, non-convex polyhedra, general polyhedral scenes (the same as the non-convex polyhedra case), and non-polyhedral objects (e.g., a torus). In general, for convex objects, the size of the VSP and the aspect graph are of order O(n^3), while for non-convex objects, the VSP and aspect graph are of order O(n^9) under perspective projection (O(n^2) and O(n^6) under orthographic projection respectively, where n measures object complexity) (Plantinga and Dyer 1990). The cube has fewer than this general number because it has 3 pairs of parallel planes that do not intersect and thus do not form a viewing possibility. Recognition algorithms that employ aspect graphs typically match a set of aspects to a possible reconstruction of a hypothesized object (see Rosenfeld 1987 for the first of these; Dickinson et al. 1992 for a nice development using object primitives). We will assume orthographic projection because we can engineer the imaging geometry to satisfy its properties.

If one 3D pose is left out of the training set representation (for example, there are no images of any LEGO® brick with its top surface facing the conveyor belt), then the following results. If one side is never seen, it affects all aspects in which it participates. For a cube, this would mean the aspect of the face itself (1), the aspects in which it appears with one neighbouring face (4), and the aspects with 2 neighbouring faces (4), for a total of 9 aspects. In other words, 9 of the 26 aspects of the brick need to be generalized using the remaining 17 learned aspects; 35% (9/26) of the possible constellation of aspects required for recognition would not be available. Recognition would fail if the observed viewpoint led to a set of visible aspects that overlapped these 9 aspects. For bricks more complex than a cube, this would differ.

However, each possible missing pose does not necessarily represent a stable configuration for a brick lying on a conveyor belt. Of the brick types listed above (wedge-plates, tiles, bricks 1xN, slopes, plates 3xN/4xN, plates 2xN, plates 1xN), many would have only 2 stable configurations, right-side up and upside down, with no possibility of a stable sideways configuration. Further, there are a number of shapes with unique characteristics, such as those in Figure 4a-c. The first could appear on its side or on its back end with the protruding elements acting as stabilizers, so it might be tilted towards the camera. The second could easily be stud-side down, on each vertical side, or on the slant, but is not likely stable on its bottom or backside. The third might have 5 stable sides. In other words, the number of possible cells of an aspect partition differs for each brick type. It is quite possible to enumerate all of these given that we have the library of part labels; however, that enumeration would be tedious, and an approximation will suffice. We assume, as mentioned earlier, that each unique brick has an average of 3 stable poses, so for approximately 1130 unique brick types there would be a total of 3390 stable brick appearances due to changing pose. Call these expected stable poses p1, p2, and p3 (perhaps right side up, upside down, and on the left side, to pick one possible set of examples).
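The cube's aspect counts used above can be enumerated directly (a small sketch; it encodes only the constraint that opposite, parallel faces are never jointly visible under orthographic projection):

```python
from itertools import combinations

faces = ['+x', '-x', '+y', '-y', '+z', '-z']
opposite = {f: ('-' if f[0] == '+' else '+') + f[1] for f in faces}

def jointly_visible(subset):
    # A face set forms an aspect iff it contains no pair of opposite faces.
    return all(opposite[f] not in subset for f in subset)

aspects = [set(c) for k in (1, 2, 3)
           for c in combinations(faces, k) if jointly_visible(c)]
print(len(aspects))        # 26 = 6 one-face + 12 two-face + 8 three-face regions

missing = '+z'             # the face never seen during training
affected = [a for a in aspects if missing in a]
print(len(affected))                              # 9 = 1 + 4 + 4 aspects
print(round(len(affected) / len(aspects), 2))     # 0.35, 'about a third'
```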
For our cube example above, a missing face would impact 9 out of 26 aspects, or about a third of all aspects. Note that the 'left out' pose in A-p must be one of the brick's stable poses, p1, p2, or p3, and not an arbitrary pose. What is the impact if one stable pose, say p1, of all brick types is left out of the training set? The aspect graph itself would be incorrect. Not only would one face (at least) be missing (or presumed flat), but its interactions with its neighboring faces would be incorrect. Recognition of that particular brick, if based on the learned visible aspects, would be impaired unless the particular viewpoint the camera sees is one from which only properly learned aspects are visible. If a third of the aspects are affected, then, assuming all viewpoints are equiprobable, one third of all views of this brick would lead to erroneous recognition. It is worth pointing out that the blocks in Fig. 4c-f are examples of objects for which degenerate views are possible: the cross-section of each is the same, only the length differs, so even though a block may be in a stable pose, the imaging geometry would yield an ambiguous situation (Dickinson et al. 1999).

Is it possible that the other aspects for this brick, or the aspects of other bricks, could together combine to permit correct recognition? We could consider this on a case-by-case basis, but this would be quite tedious. Nevertheless, it might be, with generous assumptions about the capabilities of the learned network, that the learned elements corresponding to the bricks in Figure 4d-f could combine to provide what is needed to recognize the slope of intermediate size, shown in the previous set. But there is no corresponding brick combination for the first one in the previous set, nor for many other bricks; they are all quite unique. Thus, recognition of those bricks is assumed to fail if the proper constellation of aspects is not seen. It seems safe to think that the error would be larger than that of the ideal network, which has already been pegged at G_A < 0.000018. If no brick is seen with p2, for example, it means that one third, or 1130, of the possible brick appearances would have to be recognized as a result of generalization. We can make generous assumptions about the generalization capabilities of the network, specifically that it can successfully handle similar bricks, such as the roof-like ones just depicted, even if no training sample for a particular pose is present. However, a rough count in the brick library yields 132 unique pieces that have no similar ones from which a generalization can easily be obtained, or in other words, about 12% of the possible 1130 unique bricks.

A different possibility would be some kind of data augmentation in training set preparation. No data augmentation could remedy missing poses, because no such method inserts patches from other images (the stud surface appears in other poses, of course). Even a data augmentation that could consider geometry would not be able to fill in the missing samples unless prior knowledge of what all the invisible aspects could look like, or assumptions of surface continuity, is somehow applied in the data augmentation process (see also Logan et al. 2018; Goodfellow et al. 2014). In any case, such an action has its own problems (Rosenfeld et al. 2018) and would not be a sensible strategy. Our hypothetical network A-p would have the following error, assuming the least error implied by the above arguments.
Suppose that the generalization process is effective for the bricks for which there might be additive ways of deriving a brick (even if not straightforward), but that the 132 unique bricks are all classified incorrectly, a 12% error as described. Thus, an estimate of overall test error is G_A-p < 0.120018. This estimate is likely too small.

Training Set Missing One Brick Shape

Suppose one shape is left out and the resulting system A-s satisfies the empirical error stipulated above. Shape is a more abstract notion here than the other dimensions considered. It is included because one might imagine a customer requesting an order of all sloped bricks, of all rectangular bricks, of all plates, etc. For example, the bricks in Figure 5a-d are all considered plates, and their regularity is the thickness of the base. On the other hand, the bricks in Figure 5e-h are classified as slopes because they all contain a sloped surface. There are 106 slope brick sub-types. The question here is: if all plates (or slopes, or bricks, or wedges, etc.) were left out of the training set, could the resulting network generalize so that they could be recognized sufficiently well?

Let us consider some of the details of our assumed network A-s. These images contain no texture information useful to the task, so we assume that none is learned. The imaging geometry, especially the lighting, is such that shadows or other cues for shape from shading are not useful. We can also leave out color for the current purpose. It is well known that many learned networks develop receptive field tunings in their early layers very similar to oriented Gabor filters, and this seems a good first level of processing for the brick images. One can easily imagine what a line drawing of each of the LEGO® bricks might be; the bricks are textureless. It is an acceptable assumption, then, that processing continues on such a representation.

How can shape be inferred from a line drawing? There is a wealth of literature on how computers might interpret line drawings. LEGO® brick shapes are mostly polyhedra, and one of their shape characteristics (though not a complete characterization, to be sure) is the set of labels of their lines and vertices. A labelling of an image is an assignment to each edge of the image of one of the symbols +, −, → and ← (convexity, concavity, or occluding-edge direction), and similarly for each vertex one of the many types of vertices (Clowes 1971, Waltz 1971; several sources enumerate the set, but the exact cardinality is not important). Such a labelling is a reasonable assumption as a representation of shape sufficient to enable discrimination of one shape from another, although it is not difficult to show counterexamples. Kirousis & Papadimitriou (1988) examined the processing of images of straight lines on the plane that arise from polyhedral scenes. They asked: given an image, is there a scene of which the image is the projection? Such images are called realizable. One classical approach to the realizability problem is through a combinatorial necessary condition. A labelling is legal if at each node of the image the edge labels form one of the legal junction patterns. A legal labelling is consistent with a realization of the image if the way the edges are seen from the projection plane is the way indicated by the labelling. They provided a proof that it is NP-complete, given an image, to tell whether it has a legal labelling.
This is true for opaque polyhedra, and even in the simpler case of trihedral scenes (no four planes share a point) without shadows and cracks. Although there are methods for labelling such a line drawing, the problem is that it is exponential to determine whether the labelling corresponds to a real object. In other words, since any algorithm for extracting straight lines from images may have error, any error may signal an illegal labelling where there is none, or a legal labelling where there is not one, so a verification step is needed. Once it is known that there is a legal labelling, there exist algorithms for matching the labelled image to known objects that have known labellings. These results, importantly, are independent of algorithm or implementation; they apply to the problem as a whole. Kirousis & Papadimitriou (1988) also present an effective algorithm for the important special case of orthohedral scenes (all planes are normal to one of the three axes). It is tempting to think that this latter case applies to LEGO® bricks; most are indeed approximately orthohedral (the studs pose the exception), but then again there are many pieces within the library that are strictly polyhedral or involve curved segments. For example, the class Plates 1xN includes the bricks of Figure 6a, and the class Plates 2xN includes the bricks of Figure 6b. Slopes include the bricks of Figure 6c. The class Brick 1xN includes bricks such as that of Figure 6d. Even in decomposition, these will contain elements that are non-polyhedral. The task of realizing a general, non-orthohedral scene given its labelled image can be solved by employing linear programming, i.e., a polynomial-time algorithm exists. So the problem remains to find the legal labelling. It is known that the labelling of trihedral scenes is NP-complete, as is the labelling of origami scenes, that is, scenes constructed by assembling planar panels of negligible thickness (Parodi 1996, Sugihara 1982, Malik 1987). It is difficult to accept that a deep learning procedure can effectively learn the solution to an NP-complete problem; it might, however, learn approximate solutions that are within some error bound ε_s for subsets of the full problem. We will not explore this route, but suggest that this makes the generalization issue even more difficult to address.

Figure 6. Second set of example bricks for Section 4.5.

Note that we still have not come to the generalization issue. Suppose no slopes are included in the training set, but a customer wishes a box of all slopes in the LEGO® catalog. This means that no instance of the configuration of lines and vertices seen in bricks such as that in Figure 6e has participated in any learning. It is highly unlikely that a network can construct a particular configuration out of learned elements that would suffice; after all, as shown, the problem in general is combinatorial, and even generous assumptions about learned networks cannot defeat this fact. In order to quantify the expected error of the hypothetical network A-s, we can be optimistic and say that the error would not be greater than the error incurred by mis-classifying the smallest group of bricks in the catalog. In other words, if all slopes are erroneously classified because slopes were not in the training set, it would mean that the remaining 520 - 106 = 414 types are correctly classified. The smallest such group in the catalog is that of plates, with 22 instances.
Then, if those 22 classes are not part of the training set, 22 x 6 (colors) x 72 (orientations) = 9504 images are mis-classified out of the possible 1,468,800 images, or 0.647%. Thus, an estimate of overall test error is G_A-s < 0.006488. It should be clear that this is a very optimistic estimate.

Interim Summary

Having looked at 5 different cases of selection bias in training sets, we summarize the above analysis in Table 2. The hypothetical networks A-c (for a missing primary color), A-z, A-p and A-s all have errors orders of magnitude too large given the production standards. As should be clear, if the training set is biased with respect to any characteristic except orientation in the imaging plane or non-primary colors, the resulting network is unlikely to generalize so that the required performance standard can be met. The particular values for generalization error may invite argument, to be certain. However, two indisputable facts emerge. First, the error for these four networks is far beyond the acceptable manufacturing values; it is neither small nor negligible. Second, there are at least a few training biases that cannot be generalized away. LEGO® bricks are a simple domain, one that can be completely characterized. Imagine how such a thought experiment might be impacted by a more complex domain.

Training Set Bias    Learned Network                   Generalization Error
Unbiased             A                                 G_A   < 0.000018
Color                A-c, c ∈ (red, blue, yellow)      G_A-c < 0.166685
Color                A-c, c ∈ (green, black, white)    G_A-c < 0.000018
Size                 A-z                               G_A-z < 0.018602
Orientation          A-o                               G_A-o < 0.000018
3D Pose              A-p                               G_A-p < 0.120018
Shape                A-s                               G_A-s < 0.006488

Table 2. Summary of training bias analyses.

General Discussion

The previous sections attempt to show that generalization in learned networks may not be what it seems. We will first address potential issues regarding our assumptions, approach, and analysis methods. Then, we relate our thought experiment to real domains, specifically autonomous driving, in order to show its relevance.

Methodology and Assumptions

We have sought to answer the question: If a network is trained with a dataset that misses particular values of some defining domain attribute, can it generalize to the full domain from which that training dataset was extracted while maintaining its performance accuracy? This is different from what many others have considered regarding the generalization behavior of learned networks. Rather than conduct a deep mathematical analysis or an extensive empirical analysis, neither of which has provided an answer, we tried a different approach, a Thought Experiment. The strategy we used is Proof by Contradiction via Backcasting, both well-known and widely used methods. Backcasting involves establishing the description of a very definite and very specific future situation. This specific situation involved the assumption that our question was in fact answered in the positive: that learned networks effectively generalize even when training biases are present and that they effectively bridge such biases. We then defined a real, yet simple, domain on which we would test this future situation, that of LEGO® bricks. This seems to be a novel twist on the overall problem because it ties real engineering performance bounds to the network's performance, something not often seen in the previous literature. Regarding the details, we made a number of assumptions involving the LEGO® domain, and since all were made to represent the lower magnitudes of any counts, the overall set of such assumptions can be considered generous towards a negative result for our thought experiment.
We assumed that the learned systems A and A-i work perfectly and that training error equals zero. We also assumed that regardless of the training regimen or network configuration, each network includes all the classes that would be required to compose any user query (specifically, classes for each brick sub-type, each pose type, each orientation, each color). It is certainly true that in vision, no current training set fully captures all dimensions of the representation of visual information. So this has direct relevance to computer vision systems, and likely beyond to other domains. Our thought experiment unequivocally points to the conclusion that any generalization behavior is completely dependent on the acceptable system error and on the nature of the domain data not represented in the training set.

Finally, it would be easy to argue that for the LEGO® domain, the space of possible images is small enough that a well-designed neural network would perform very well, or even that there is now enough computing power cheaply available to enable a brute-force search solution. Both these points are true, and in practice perhaps this is exactly how LEGO® might develop a practical system for this purpose. It is just as valid to think that there could be many engineering workarounds or manipulations for problems with how object pose or orientation might appear on a conveyor belt; examples abound on modern manufacturing lines (tracking guides, slots, use of gravity, friction or puffs of air for positioning, etc.). But this was not the point of our exercise. We had no desire to present a realizable design for a production system. The point was to select a real and complex task, whose scale is manageable for complete representation, in order to test the limits of generalization in the face of training biases and performance requirements.

Cases Outside the Brick Binning Task

The argument of Section 4.0 assumes that only one value of some dimension is missing. What if the training bias encompasses some combination of attributes? For example, for some bricks a particular color may be missing while for others some orientation is missing. Or perhaps there are no red bricks in the training set and some 3D poses are also missing. Worse still, suppose there are no yellow bricks and no slopes in the training set. It is straightforward to devise many more combinations. For some, the impact might be small; for others, large. It seems apparent that there are many combinations where no satisfactory behaviour would be observed in terms of LEGO®'s production standards.

The LEGO® brick application was discussed with a few assumptions to simplify the hypothetical system. In a different real-world setting, where arbitrary kinds of objects might be involved in normal scenes, these assumptions would not be valid. First of all, there would not be a single well-isolated object in a scene; there would be many, likely with interactions among them. Changes in the viewpoint of the observer or, in general, the overall geometry of the image formation process (Faugeras 1993), shadows or highlights and how they change with illumination direction and surface properties (Horn 1986), accidental alignments of object features (Dickinson et al. 1999), and more will need to be considered.
A detailed analysis of each of these complexities is beyond our scope; however, just for illustrative purposes, we consider the viewpoint change case for a set of assembled bricks, an object more complex than a single brick but still not quite as complex as an arbitrary real-world object. Suppose we are presented with a construction of 'standard' LEGO® bricks (all edges are orthogonal to the others), say on a LEGO® plate of 10 x 24 studs, and the plate is presented with some random 3D pose to the observer. Let us ignore for the moment the impact of the studs on the brick surfaces. A simple enough task for an observer might be to count the number of bricks used in the construction. For our hypothetical scenario above, suppose a customer wanted 1000 random constructions of this form, each with exactly K bricks. Assuming a random brick construction engine, these items might pass under the same camera on a conveyor belt, and a system to count the bricks would then decide which random construction to give to this particular customer. As mentioned earlier, Kirousis & Papadimitriou (1988) provide an effective algorithm for labelling all the visible lines and vertices of orthohedral scenes, so the recognition of each visible brick for counting purposes (only) is assumed solved. However, it is not difficult to see that if a brick is occluded by another, some or all of its lines and vertices would not be labelled, and thus it could not be counted unless the viewpoint changes. The incidence of occlusions depends on the stable poses of the brick construction. One pose, perhaps the most stable, is for the supporting plate to lie flat on the conveyor belt. Occlusions would still occur if the construction included bricks that are partially supported, with this overhang perhaps above other bricks. Two other stable positions can be formed where one of the long ends (there are 2 long ends) of the plate is on the belt but the overall construction is tipped over, with some other contact point and the edge forming a stable pose. Here, occlusions would be frequent. As in the case of the previously described network A-p, where one 3D pose is left out of the training set, no amount of training could fill in the structure that cannot be observed. Active observation has been shown to be critical for many real-world visual tasks (Andreopoulos & Tsotsos 2013, Bajcsy et al. 2018). It is not difficult to imagine a wide variety of tasks outside the ones mentioned where similar problems would arise. It is important to note that for many, solutions could easily be engineered to avoid the problems; but that was not the point of our analysis. Our point was to ask questions about generalization, and in that respect we conclude that great care needs to be applied when making assumptions about how a trained network can or cannot generalize.

Real-World Implications

It is important to address why our analysis is representative of how generalization might appear in other, real-world, visual domains and tasks. The first point to make is that the creation of a complete image dataset is clearly an impossibility regardless of domain, and thus any real training set will include gaps if not also outright biases. Generalization is necessary. Suppose that by some crazy quirk, the training set for an autonomous driving scenario has no images where a traffic light is yellow, only green or red, and that there are no yellow cars, no yellow yield signs, nothing yellow at all.
As we showed above, the system will not generalize in order to ever recognize yellow. The point here is that without a careful analysis of exactly what is and is not in the training set, one can never be certain of generalization behaviour in a safety-critical situation. Rasouli & Tsotsos (2019) survey in a very broad fashion the issues relevant to how autonomous vehicles might interact with pedestrians. This is, obviously, only one component of the full autonomous driving task, but it will suffice to make our point here. They show that there are at least 22 major factors, and among these at least 12 require visual perception and decision-making. Among these 12 are pedestrian gender, age, road structure, traffic volume, road signal systems, and so on. The variety of appearances of road signal systems alone is daunting. For our thought experiment approach, one might think of a dataset that, through no fault of its creators, might not represent young children as pedestrians very well, or might not represent all skin tones well (a problem well-noted for the face recognition task). Thus, it is easy to see how one might proceed with the thought experiment in the autonomous driving setting and reach much the same conclusions.

The number of accidents involving autonomously driven automobiles is increasing, and here we give a couple of examples to illustrate how important generalization should be. Consider the accident in which a person walking a bike was killed. The system creators may indeed have had a category for pedestrian and a category for bicycle (either stationary or being ridden) but may not have had a category for person-walking-a-bike (across the street). An NTSB 2018 study concluded that the self-driving system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact. The vehicle speed was 43 mph. As the vehicle and pedestrian paths converged, the system classified the pedestrian as unknown, then as a vehicle, and then as a bicycle, with varying path projections. At 1.3 seconds before impact, the self-driving system initiated emergency braking in order to avoid a collision. First, hypothetically, it is possible that person-walking-a-bike was never a targeted category for the detection system, so that it had difficulty appropriately registering the situation, seeing an unknown object, then a vehicle, and then a bicycle. Second, it is unclear that the detection network could compose a person-walking-a-bike out of detections of a person and a more-or-less co-located and co-travelling bike without being explicitly engineered to do so.

Consider a second example. A video of an autonomous vehicle crashing directly into an overturned box truck in Taiwan went viral, partly because it showcases what many assume to be a failure of the vehicle's autopilot feature. The clip makes it apparent that the truck should be an easy obstacle to detect and avoid if one is paying attention to the road; it is hard to miss a giant white blockade in the middle of the highway. The autopilot nevertheless presumably failed to detect the truck; the driver was forced to apply the brakes himself, but it was already too late to avoid the collision. There was also a sign 100 metres before the overturned truck alerting any driver to the crash, which means that both driver and autopilot system ignored that, too. Was this an error of generalization? Perhaps. The system could not have been trained on all possible images it might see.
It would depend on its ability to generalize across such training gaps. An overturned white truck in the middle of a path should have been an easy case for a human visual system if it were part of the current field of view (Section 4.4 showed how a missing viewpoint impacts network performance; this could be such a case). It might also be the case that this was not a simple matter of pattern recognition; rather, a more complex kind of generalization might be at play, one that requires reasoning about the consequences of acting within the observed environment. There is an important reminder from such tragedies: neural networks and their underlying theory provide no guarantee that a given network will be able to cover all the relevant dimensions or categories or compositions unless they are included and tested accordingly. Human perception and cognition do not depend solely on classification of images. Carroll (1993) gives scores of human cognitive abilities as examples. Humans see but also reason and act (see Bajcsy et al. 2018). It may be that a new unification of classical methods in computer vision and AI with current machine learning methods will yield the best of both.

Limits on Generalization

It should be clear that no neural network can make a provably undecidable problem decidable, nor a provably intractable problem tractable (such as the NP-Complete problems described in Section 4.5). For such problems, the only recourse is approximation. The main question then is how far such an approximation would be from the production requirements of some particular application, such as our hypothetical brick binning task. For example, in the missing pose case of Section 4.4, it would not be unexpected for a full 3D brick to be classified even if the network was not trained on one possible pose; an interpolation that fills in the gaps is likely. But how good is that interpolation in practice? The same is true for the missing color case of Section 4.1; some color mixture will certainly be output by the network, but how close is it to the required one? Modern applications employing learned networks for some domains approach high-90% accuracy levels, quite an accomplishment to be sure. However, our brick binning task needs several orders of magnitude better performance. Many applications involving human life and safety require even lower error bounds. In other applications, such as face recognition for photo-sharing or casual language translation, errors have virtually no associated costs (from a life and safety perspective), so our analysis may seem less relevant there. Further, the proofs in Kratsios & Bilokopytov (2020) and others make it appear as if the approximation error can be arbitrarily small (but non-zero). However, it seems a great challenge to approach this theoretical limit, given that the reported empirical errors of the latest neural networks are many orders of magnitude higher than the error bounds our brick binning task requires. These are empirical questions that cannot be answered here but are certainly important for future study.
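To see the gap concretely, a back-of-the-envelope calculation helps. The following sketch is ours, and the throughput figure is an illustrative assumption rather than a number from the brick binning specification:

    # Illustrative only: assumed sorter throughput, not the task's actual spec.
    bricks_per_day = 1_000_000
    for accuracy in (0.99, 0.999, 0.999999):
        errors = bricks_per_day * (1 - accuracy)
        print(f"accuracy {accuracy}: ~{errors:,.0f} mis-binned bricks per day")

At the assumed throughput, a high-90% classifier mis-bins thousands of bricks per day; meeting a budget of a handful of errors per day requires roughly four more orders of magnitude of accuracy.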
The Problems with Data Augmentation

As has been mentioned a few times so far, the set of techniques collectively known as data augmentation, a data-space solution to the problem of limited data, is commonly used to supplement the data assembled for training sets in learned systems (for reviews see van Dyk & Meng, 2001, Shorten & Khoshgoftaar, 2019). Such techniques enhance the size and quality of training datasets, leading to better systems. This is done under the assumption that more information can be extracted from the original dataset through augmentations. These augmentations artificially inflate the training dataset size by either data warping or oversampling. Many other strategies for increasing generalization performance focus on the model's architecture itself, leading to progressively more complex architectures (Shorten & Khoshgoftaar, 2019). Other techniques for this problem include dropout regularization, batch normalization, transfer learning, and pretraining (e.g., Kukačka et al., 2017, Hernández-García & König, 2018). This paper focuses on visual data; the image augmentation algorithms include geometric transformations, color space augmentations, kernel filters, mixing images, random erasing, feature space augmentation, adversarial training, generative adversarial networks, neural style transfer, and meta-learning (Shorten & Khoshgoftaar, 2019). The augmented data is intended to represent a more complete sampling of possible data points, thus decreasing the distance between the training set and the validation set, as well as test sets.

All of these have proved themselves empirically to some degree, but they all exhibit three issues. First, these are not principled approaches with respect to the actual data sampling deficiencies. Second, the data augmentation methods have secondary effects which seem ignored but which are potentially detrimental. Finally, they are not targeted at a domain's performance specification; they simply seek to improve generalization in some neutral manner.

Consider the first of these issues. Any data set that purports to adequately represent some domain, in order for a learning system to extract the statistical regularities important for that domain, must actually include those features in a statistically discoverable manner. All of the methods increase the size of the training data via manipulations of the existing data, in the belief that the enhanced data set better samples the domain. It is puzzling that these are not based on an analysis of the data set in order to discover exactly what might be missing, that is, what features are not statistically well represented.

Consider next the issue of secondary effects. The augmented image set may help generalization, but the added samples have the potential secondary effect of changing the balance of representation for other features. Take a simple example. In order to increase rotation invariance as a performance criterion, an existing image set might be augmented by rotated versions of those images (see Logan et al. 2018, where this is also discussed). The artificially rotated images, however, change the distribution of lighting source directions across the full set of images. This may not be relevant for many applications, and of course humans have little difficulty with this in general (but are not immune to it; e.g., the 'hollow face illusion', Hill & Bruce 1993). But for those applications where the learned system is part of an agent that needs to make inferences about lighting direction (to interpret shadows or depth cues from surface shading, etc.), this has the potential of improving performance on one variable while damaging another as a secondary effect.
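To make this secondary effect concrete, consider the minimal sketch below (ours; it assumes a matte surface whose shading is a pure left-to-right intensity gradient). A routine 90-degree rotation augmentation silently rotates the shading gradient, that is, the implied light source direction:

    import numpy as np

    # A 64x64 synthetic "image" lit from the left: bright at x=0, dark at x=w-1.
    h, w = 64, 64
    img = np.tile(np.linspace(1.0, 0.0, w), (h, 1))

    def shading_direction(im):
        # Dominant shading direction from the mean intensity gradient, in degrees.
        gy, gx = np.gradient(im)
        return np.degrees(np.arctan2(gy.mean(), gx.mean()))

    aug = np.rot90(img)               # a standard rotation augmentation
    print(shading_direction(img))     # ~180 degrees: gradient points left
    print(shading_direction(aug))     # ~90 degrees: the implied light has moved

An augmentation intended purely to improve rotation invariance has thereby altered the statistics of a different feature dimension, lighting direction, across the training set.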
The same would be true for augmentation via symmetry manipulations; again the imaging geometry changes. Another manipulation involves simple spatial shifts, in order to enhance position invariance, or image padding, to ameliorate boundary effects. This changes the context within which target objects are found and, as seen in Rosenfeld et al. (2018), context is very important and can have quite a detrimental impact on classification accuracy. Learning methods learn not only the required target objects but also the context within which they are found. Much more on such effects appears in Shorten & Khoshgoftaar (2019). These secondary effects will ripple across the full set of features a domain requires. Each manipulation, although targeted at some specific feature, necessarily changes the sampling of other feature dimensions. As for the third issue mentioned above: without direct knowledge of the training set's sampling deficiencies, data augmentation may improve global performance in an indirect manner, and not necessarily performance along a particular feature dimension of importance.

We should also point out that these image manipulation methods rarely go outside the 2D image plane; objects in the real world are three-dimensional, and none of these methods can possibly augment in the third dimension, let alone capture how 3D imaging geometry (light sources, camera viewpoint and settings, object position relative to illumination and camera, ambient or reflected illumination, surface properties) affects appearance (although see Ian et al. 2014, where the assumptions of sufficient view data and object surface continuity seem to permit new views, but not with accompanying illumination changes).

Bias in Datasets

In Section 1.2, the experimental biases described by Sackett (1979) were very briefly discussed, ending with the suggestion that these, with suitable domain translations, might be as valid for computer vision systems (and AI in general) as for medical data. Certainly the potential for bias is very real. Consider the number of possible training sets in the visual domain. Recall the estimate of the number of discernable images presented earlier, 1.5 × 10^25. For the sake of argument, suppose that the number of samples used for training is 10^7 (the scale ImageNet represents, as mentioned earlier). How many subsets of 10^7 images are possible? This is given by the binomial coefficient (1.5 × 10^25 choose 10^7), not so easy to evaluate, but certainly impossibly large to be certain which of these possible training sets are free of bias. Although it might be easy to search in a brute-force manner for a training set that yields incrementally better performance numbers than others, this is not the point: bias in that training set remains unaddressed. Debate over Pavlidis' estimate will not really lead to a better result, because even if that estimate is reduced to 10^10, the number of possible subsets remains insanely large 11.
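A short calculation (ours; it uses the approximation C(n, k) ≈ n^k / k!, valid when k is much smaller than n) puts rough numbers on these magnitudes:

    import math

    def log10_binom(n, k):
        # log10 of C(n, k) for k << n: log10(n**k / k!), with Stirling for log10(k!).
        log10_kfact = (k * math.log(k) - k + 0.5 * math.log(2 * math.pi * k)) / math.log(10)
        return k * math.log10(n) - log10_kfact

    print(log10_binom(1.5e25, 1e7))  # ~1.9e8: about 10**(190 million) subsets
    print(log10_binom(1e10, 1e7))    # ~3.4e7: still about 10**(34 million) subsets

Even under the drastically reduced estimate, the space of candidate training sets is far beyond exhaustive examination.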
Even so, many shrug off such combinatorial arguments as irrelevant. After all, humans are able to understand any image they see on the internet, and many feel that machines should also have this ability. Such a casual conclusion arises from a lack of knowledge; it ignores the fact that humans do this at different speeds and with different performance levels (see Carroll (1993) for a comprehensive presentation of human visuospatial abilities; see Itti, Rees & Tsotsos (2005) for an encyclopedic treatment of the breadth of characteristics seen in human and primate attentional behavior, among others). Reaching such a casual conclusion is entirely unjustified, and if it forms a pillar of the empirical method for the field, there is an important need for re-evaluation.

Certainly, the phrase 'sees like humans', which is very frequently attached to descriptions of modern computer vision research, needs to be reconsidered. This is especially true if the computer system outperforms humans. To see like a human, from an external observer's point of view, should mean that the accuracy is similar, that the time course to a decision is similar, that the number and kinds of errors made are similar, and that any additional external behavior (eye movements, for example) is similar. In general, these are not reported, and it is unlikely they are true for modern systems. It is a great goal to build a machine that can 'see like humans', but one must be clear about what this actually entails. By eye alone, humans may not match engineering performance specifications such as those described here.

It seems intractable and infeasible to attempt an empirical examination of all potential sources and impacts of bias when developing a training and test set of data. However, as we have shown, a procedure like our thought experiment might be a way to begin addressing the issue. Tied to a domain-specific performance target, it seems straightforward to extend the reasoning described here in order to determine the impact of many types of bias. This might be time-consuming and tedious, but it is tractable. Rasouli & Tsotsos (2019) have shown that it is possible to develop a substantial (not necessarily complete, but very useful nonetheless) set of factors relevant to how drivers deal with pedestrians. A similar analysis could be done for other domains.

It may be useful to require that the creation of any training or test set be accompanied by a Bias Breakdown, that is, an analysis of the spectrum of possible biases that is explicit regarding which biases are relevant and which are not, which can be dealt with by good sample choices and which cannot, and what impacts on final system performance might be due to the remaining biases. This Bias Breakdown can be organized easily. One suggestion for a template could be the following: for each source of bias — the 56 Sackett describes might be too much, but it is a good starting point — record whether the bias is relevant to the domain, whether it can be addressed by sample choices, and the estimated impact of non-evaluated biases. In our thought experiment, as already described, we focussed on Selection Bias and, within Sackett's sub-classes, missing data bias fits best. Completing this single dimension of the template gives, for example:

- no samples of multiple dimensions will likely increase error;
- low sample sizes of a single dimension may lead to unpredictably less error;
- low sample sizes of multiple dimensions are likely to increase error above that for the lowest error on a single dimension with zero samples.

This should be considered a starting point, and experience will determine how such a Bias Breakdown may evolve.
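To make the template concrete, here is a minimal machine-readable rendering (ours; the field names are illustrative, derived from the definition above rather than taken from the original template):

    from dataclasses import dataclass

    @dataclass
    class BiasEntry:
        # One row of a Bias Breakdown, following the definition in the text.
        source: str                    # e.g., one of Sackett's bias names
        relevant: bool                 # is this bias relevant to the domain?
        addressable_by_sampling: bool  # can good sample choices deal with it?
        estimated_impact: str          # estimated impact on final system performance

    breakdown = [
        BiasEntry(
            source="selection bias / missing data bias",
            relevant=True,
            addressable_by_sampling=False,  # e.g., unseen poses cannot be sampled away
            estimated_impact="zero samples along a dimension make it unrecoverable",
        ),
    ]

A completed breakdown would contain one such entry per considered bias, with the non-evaluated ones carrying explicit impact estimates.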
Note that it is important to know, for each application, what the acceptable performance requirements might be, and to tie the goal of generalization to them. For safety-critical applications, the need for clear, as-complete-as-feasible documentation of biases in training data and their impacts (or biases in any other aspect of system development, similar to the manner in which Sackett describes bias across the full research enterprise) cannot be over-emphasized.

Conclusions

This thought experiment was motivated by the desire to understand the nature of generalization in learned computer vision systems. At some basic level, this understanding should not depend on the exact characteristics of the system architecture or learning method. We sought to understand generalization at a more fundamental level. Specifically, we considered the impact of Selection Bias on generalization, because in vision a training set is a very tiny subset of the full population of all possible images. We employed a well-understood methodology in order to challenge the limits of generalization: Reductio ad absurdum (proof via contradiction) via Backcasting. Our analysis indeed found contradictions. These were mostly due to physics, to geometry, or to computational intractability, for which no amount of cleverness in an algorithm can compensate.

Sampling bias may be hidden and silent. It is hidden because very few research efforts verify that a domain is appropriately sampled to create the training set, and silent because the errors it causes are seen only if appropriate test cases are used (even though many have tried to raise the point, e.g., Akhtar & Mian 2018). Any learning system is agnostic as to what is missing when trained, whether it be values of attributes or whole attribute dimensions. One needs to be methodical and principled in dataset construction — statistically valid in sampling all the variations of the data — and correspondingly careful about data augmentation and generalization expectations. Our analysis also found that generalization errors would exceed, by orders of magnitude, the engineering requirements of our toy (yet not so 'toy') domain.

Our thought experiment points to three conclusions: first, that generalization behavior depends on how sufficiently the particular dimensions of the domain are represented during training; second, that the utility of any generalization is completely dependent on the acceptable system error; and third, that specific visual features of objects, such as pose orientations out of the imaging plane or colours, may not be recoverable if not sufficiently represented in a training set. It may be that more principled design, not only of training sets but of the whole empirical protocol for learned systems, is necessary. Current learned systems and their underlying theory provide no guarantee that a given system will be able to cover all the relevant dimensions, categories or compositions unless they are included and tested accordingly. It may be that a new unification of classical methods in computer vision and AI with current machine learning methods, with each paying more attention to the variability caused by object pose, viewpoint, and more generally the full imaging and lighting geometry, will yield the best of both.

Whereas empirical confirmation may be the best method for confirming generalization effectiveness, it is a time- and resource-expensive activity. Our Thought Experiment Probe approach, coupled with the resulting Bias Breakdown, is far less costly and can be very informative as a first step towards understanding the impact of biases. Further, it can be an important supplementary documentation component for training data intended for systems with critical performance requirements.
Obesity and Bone Loss at Menopause: The Role of Sclerostin

Background. Peripheral fat tissue is known to positively influence bone health. However, evidence exists that the risk of non-vertebral fractures can be increased in postmenopausal women with obesity as compared to healthy controls. The role of sclerostin, the SOST gene protein product, and body composition in this condition is unknown. Methods. We studied 28 severely obese premenopausal (age, 44.7 ± 3.9 years; BMI, 46.0 ± 4.2 kg/m2) and 28 BMI-matched postmenopausal women (age, 55.5 ± 3.8 years; BMI, 46.1 ± 4.8 kg/m2) through analysis of bone density (BMD) and body composition by dual X-ray absorptiometry (DXA), bone turnover markers, sclerostin serum concentration, glucose metabolism, and a panel of hormones relating to bone health. Results. Postmenopausal women harbored increased levels of the bone turnover markers CTX and NTX, while sclerostin levels were non-significantly higher as compared to premenopausal women. There were no differences in somatotroph, thyroid and adrenal hormones across menopause. Values of lumbar spine BMD were comparable between groups. By contrast, menopause was associated with lower BMD values at the hip (p < 0.001), femoral neck (p < 0.0001), and total skeleton (p < 0.005). In multivariate regression analysis, sclerostin was the strongest predictor of lumbar spine BMD (p < 0.01), while menopausal status significantly predicted BMD at the total hip (p < 0.01), femoral neck (p < 0.001) and total body (p < 0.05). Finally, lean body mass emerged as the strongest predictor of total body BMD (p < 0.01). Conclusions. Our findings suggest a protective effect of obesity on lumbar spine and total body BMD at menopause, possibly through mechanisms relating to lean body mass. Given the mild difference in sclerostin levels between pre- and postmenopausal women, its potential actions in obesity require further investigation.

Introduction

Despite being acknowledged as beneficial to bone mineral density (BMD) due to the mechanical loading effect of weight excess [1], obesity is emerging as a potential detrimental factor for bone health, particularly for appendicular bones [2]. Studies in adolescents and adults pinpointed the negative effects of body fat excess on bone strength [3-5] and cortical rearrangement through insulin resistance [6]. In single-center analyses, osteoporosis was associated with obesity in one out of three women [7], while prospective studies found that nearly one out of four postmenopausal women with fractures presented with obesity, with obesity acting as the dominant risk factor for ankle and upper leg fractures [8].

Exclusion criteria included: menarche > 16 years; previous hysterectomy, ovarian surgery, sex hormone replacement treatment, or menopause < 45 years; osteoporosis treatment; previous diagnosis of type 1 diabetes mellitus (T1DM) or T2DM; autoimmune disorders affecting bone metabolism; chronic steroid, heparin or anti-convulsant therapy; evidence of spontaneous fractures, osteogenesis imperfecta, or family history of severe osteoporosis; malnutrition, malabsorption, chronic liver or kidney disease; less than one alcoholic drink reported per day and smoking habit. Each participant was admitted to the study as part of a regular workup of obesity and its complications, and enrolled after signing an informed consent. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Istituto Auxologico Italiano.
Table 1. Anthropometric and biochemical parameters measured in the study groups. Significance was calculated by two-tailed unpaired Student's t-test. For significance: * p < 0.05; ** p < 0.01; *** p < 0.001.

Bone Mineral Densitometry and Body Composition Analysis

BMD measurements of the lumbar spine (L1-L4) and proximal femur were undertaken with the Prodigy densitometer (Lunar, Madison, WI, USA). Body composition was measured as derivative values of lean body weight and total body percentage fat. A positioning device was used to facilitate the reproducible measurement of the proximal femur. Quality control included daily measurement of an anthropomorphic spine phantom at each site, calibration with a spine phantom to provide cross-site and cross-time calibrations, and a site-level review of all participant scans for specified criteria. Bone scans were acceptable if there was no radiological interference from truncal adiposity, arms or legs did not overlap, no body parts lay outside the scan field, and there were no positioning problems due to adiposity and no motion. In the case of the proximal femur, all femur subregions had to be valid in order for data to be retained for any of the subregions. Because the three BMD measures correlated highly with each other, a value of total BMD was incorporated in the analysis by averaging all sites analyzed.

Statistical Analysis

All data are expressed as mean ± SD. Data were tested for normality of distribution by the Shapiro-Wilk test and log-transformed when needed to correct for skewness. Differences between pre- and postmenopausal subjects were calculated by two-tailed unpaired t-test. Correlation analyses were calculated with Pearson's coefficient. Inspection of the distributions of the variables indicated that the two groups overlapped; therefore, to make the correlation analysis numerically more consistent, Pearson correlations were performed on the total 56 subjects to assess relationships among variables. The general linear model and analysis of covariance (ANCOVA) were used to evaluate the interaction between variables after statistically controlling for the effects of menopause; effect sizes and interactions were computed between variables and the covariate. Non-collinear independent predictors were included in a stepwise multiple regression model, as described in the results section. For significance, p < 0.05 was considered of statistical value. Analyses were performed with SPSS 21.0 (SPSS, Inc., Chicago, IL, USA).
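The reported pipeline can be mirrored outside SPSS; the sketch below is ours, in Python, with illustrative file and column names rather than the authors' variable names:

    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    # One row per subject; columns are hypothetical stand-ins for the study variables.
    df = pd.read_csv("cohort.csv")  # e.g., columns: group, bmd_spine, lean_mass, sclerostin

    w, p_norm = stats.shapiro(df["bmd_spine"])                     # normality check
    groups = [g["bmd_spine"] for _, g in df.groupby("group")]
    t, p_diff = stats.ttest_ind(groups[0], groups[1])              # two-tailed unpaired t-test
    r, p_corr = stats.pearsonr(df["sclerostin"], df["bmd_spine"])  # pooled Pearson correlation

    # ANCOVA-style model: effect of lean mass on BMD controlling for menopausal status.
    model = smf.ols("bmd_spine ~ lean_mass + C(group)", data=df).fit()
    print(model.summary())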
Results

Table 1 illustrates the large set of bone/adipose tissue markers analyzed in this study. With the exception of the expected differences in menopause-related hormones, menopause did not significantly alter the hormone profiles investigated herein, while adiponectin levels were higher after menopause, possibly due to an age-related effect. In terms of glucose metabolism, premenopausal and postmenopausal women with obesity exhibited comparable rates of IFG (53.5% vs. 35.7%), IGT (35.7% vs. 32.1%) and T2DM (25% for both).

DXA scanning revealed lower lean mass and higher fat mass in postmenopausal as compared to premenopausal women. Lumbar spine BMD was similar between groups, while lower BMD values at the total hip (−0.138 g/cm2), femoral neck (−0.147 g/cm2) and total skeleton (−0.210 g/cm2) were documented in postmenopausal women compared to premenopausal ones (Table 3). BMD values suggestive of osteoporosis were only observed at the lumbar spine in one premenopausal woman.

Table 3. Dual X-ray absorptiometry (DXA) parameters measured in the study groups. Significance was calculated by two-tailed unpaired Student's t-test. For significance: * p < 0.05; *** p < 0.001.

In correlation analysis (Table 4, Figure 1), sclerostin was associated with NTX and testosterone levels, as well as with lumbar spine BMD and skeletal BMC, while an expected inverse association related estradiol to NTX (r = −0.373, p < 0.01) and CTX levels (r = −0.290, p < 0.05). Lean mass clearly elicited a protective effect on BMD at the femoral neck (r = 0.301, p < 0.05), total hip (r = 0.290, p < 0.05) and total body (r = 0.499, p < 0.001). ANCOVA showed a poor interaction between lean mass and menopause on total BMD (F = 0.6, not significant).

As shown in Table 5, a multivariable model was built to test the role of hormone and adiposity markers on sclerostin levels and bone health. Sclerostin emerged as the strongest predictor of BMD at the lumbar spine. In turn, testosterone levels and menopause explained about 40% of the variability in sclerostin levels. As expected, menopause significantly predicted BMD at the femoral neck, total hip and total body. Similar correlations were obtained when estradiol levels, as a continuous variable, replaced the dichotomous menopausal status. Finally, lean body mass emerged as the strongest predictor of total BMD.

Table 5. Multivariable regression analysis in merged study groups. Results are provided for variables in the regression equation, adjusted R2 values for significant predictors, standardized coefficients (β) and p values, with significant associations shown in bold character. For menopause: 0 = no, 1 = yes. For significance: * p < 0.05; ** p < 0.01; *** p < 0.001.

We then assessed the determinants of circulating sclerostin in our obese cohort. Figure 2 presents η2 values that explain how much (in percentage) each variable contributed to the total variability in sclerostin levels. Testosterone levels and menopause had the highest impact, together accounting for 70% of circulating sclerostin variability, while a modest influence was observed for HOMA-IR and PTH concentrations.
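For readers who wish to reproduce this style of analysis, η2 per term can be obtained from an ANOVA decomposition; the sketch below is ours, reusing the illustrative data frame from the previous sketch with hypothetical column names:

    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Share of total variability in sclerostin attributable to each term (eta-squared).
    fit = smf.ols("sclerostin ~ testosterone + C(group) + homa_ir + pth", data=df).fit()
    anova = sm.stats.anova_lm(fit, typ=2)            # Type II sums of squares
    eta_sq = anova["sum_sq"] / anova["sum_sq"].sum()
    print(eta_sq)

Each entry is SS(term)/SS(total), i.e., the percentage of sclerostin variability contributed by that variable.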
Discussion

This comprehensive study conducted in severely obese women distinguished by menopause provides evidence that obesity does not associate with (early) postmenopausal bone loss at the lumbar spine, with sclerostin acting as the strongest protective determinant at this site. By contrast, menopause associates with unfavorable changes in femoral BMD, with the protective effect of lean mass being predominant at this site. Site-specific relations between obesity and bone health hence provide evidence of peculiar biochemical and anatomical determinants for such effects.

Menopause causes bone loss in some women more rapidly than in others. High body weight and BMI putatively act protectively on bone composition and fracture risk, with rates of spine and hip bone loss reported to be 35-55% lower in postmenopausal women with body weight in the top tertile than in those in the lowest tertile [40]. Potential explanatory mechanisms reside in the extraovarian contribution of estradiol from aromatase activity in fat tissue [41] and in the mechanical loading effect caused by weight excess, which stimulates proliferation and differentiation of osteoblasts and osteocytes through the Wnt/β-catenin pathway [42,43]. However, considering that the degree of physical activity is less consistent in obese women, as it generally is in the postmenopausal state, it should be considered that obese women may be more exposed to peripheral fractures, given the prominent weight-bearing activity of the spine.

Increasing credit is given to the suggestion that adiposity actually impairs bone health through direct and indirect mechanisms relating to decreased osteoblastogenesis and increased adipogenesis [44]; increased osteoclast activity through proinflammatory cytokines and the RANKL/RANK/OPG pathway [45], which regulates osteoclast formation, activation and survival in normal bone modeling and in a variety of pathologic conditions characterized by increased bone turnover [46]; degenerative and inflammatory disorders of the musculo-skeletal system [47]; excess fat intake interfering with intestinal calcium absorption [48]; and altered vitamin D and PTH status [15]. Furthermore, fat deposition in vertebral bone marrow influences bone mass and fracture risk [49]. In pre- and postmenopausal cohorts, fat mass has been found inversely associated with BMD and correlated with non-spine fractures when body weight is kept constant [50]. Also, obesity increased the incidence of ankle and upper leg fractures in postmenopausal women [8]. Finally, sarcopenic obesity with a predominant visceral phenotype has been found associated with a greater fracture risk, in addition to promoting proinflammatory and dysmetabolic changes [51].

In this comprehensive study, we examined: (1) the overall effect of early menopause on bone density in relation to metabolic and hormonal profiles linked to obesity and bone health; (2) the involvement of sclerostin in site-specific mineral content; and (3) the role of body composition on bone health across menopause.

Bone density and metabolism: In perimenopausal women, BMD of the radius, femoral neck, and spine declines progressively [40]. Parallel results have been underscored in women spanning a wide age range, where postmenopausal bone loss was found to be significant both at the lumbar spine and at the hip [52], but more so at Ward's triangle [53].
The Study of Women's Health Across the Nation (SWAN) outlined an accelerated bone loss at the lumbar spine during the 1 year preceding and the 2 years following the final menses, where BMD decreased by 3.2% and 5.6% in the late perimenopause and after menopause, respectively [54], while milder bone loss was observed at the hip [40,54]. In the present series, postmenopausal obesity did not change lumbar spine BMD, while it associated with 12% and 13% lower BMD at the hip and femoral neck, respectively, as compared to the premenopausal state. Given the coordinated changes seen in bone-specific markers, anabolic hormones and lean mass, this site-specific effect of (early) menopause in obesity suggests an impairment in cortical porosity and cortical thickness as a potential consequence of menopause in obesity. These changes may alter the mechanosensory properties of bone tissue and local tissue repair responses, which progress to microdamage with aging [55], a circumstance consistent with the higher rates of ankle and upper leg (fragility) fractures seen in postmenopausal obesity [8,9].

Biochemical dynamics consistent with increased bone turnover in menopause involved increments in circulating CTX and NTX. Such an increase, which has already been seen in obesity [35,56], predicts the rate of bone loss in postmenopausal women and the risk of osteoporosis in elderly women [57]. Collagen telopeptides were also related to osteocalcin, a non-collagen protein of bone matrix and one of the osteoblast-specific proteins, which was in turn inversely associated with total body BMD. This relationship extends to obesity the role of osteocalcin as a marker of bone turnover rather than a specific marker of bone formation, as it functions to limit bone formation without impairing bone resorption or mineralization [58]. In our hands, an inverse association related osteocalcin to glucose metabolism, which agrees with the antagonizing effect of osteocalcin on hyperglycemia and β-cell dysfunction [59] and substantiates the converging effect of the dysmetabolic changes of obesity on bone health. As such, a negative association related insulin resistance to BMC, which agrees with evidence of increasing insulin resistance with decreasing BMC [4]. Insulin resistance may thus reflect a cardinal pathway promoting lower bone strength and increased risk of fracture in patients with diabetes mellitus [6,19].

Sclerostin and bone-regulating hormones: Because sclerostin is a factor that promotes bone resorption, we evaluated it and its regulation by perimenopausal hormone status, so as to identify potential modulators of bone status in obesity. Previous studies found that sclerostin levels increase with age [60], menopause [61], insulin resistance and type 2 diabetes mellitus [34], while a debated association with obesity exists [35]. In our population, mean sclerostin levels were slightly increased in postmenopausal women, a finding that agrees with studies showing that sclerostin progressively increases after menopause [62]. With regard to sex steroids, it is known that estrogens suppress circulating sclerostin and bone sclerostin mRNA levels, while an opposite role has been observed for testosterone in both experimental and clinical studies [30,63,64]. In our hands, there was no association between estradiol and sclerostin levels while, surprisingly, sclerostin decreased with increasing testosterone.
As such, menopause (positively) and testosterone (negatively) largely predicted sclerostin variability in multivariable regression analysis. We speculate that the negative control exerted by testosterone on sclerostin levels is peculiar to obesity and likely due to enhanced fat-derived androgen aromatization. It is known that obesity associates with variable androgen excess throughout the lifespan in both pre- and post-menopausal women, which does not often manifest with specific clinical signs or symptoms and originates from changes in the pattern of secretion and/or metabolism, transport, and/or local action of androgens [65-67].

With regard to bone health, sclerostin levels were found to be positively associated with lumbar spine BMD, a finding confirmed in multivariable regression analysis. This correlation extends to obesity similar findings previously described in different cohorts [30,32,33,68,69], although it seems to conflict with the positive association between sclerostin and NTX levels seen herein, as well as with the intrinsic osteopenic effects of sclerostin [26,28]. Further studies are required to clarify this issue. Within the comprehensive hormone panel screened in the clinical setting, we failed to identify a role for vitamin D, adiponectin and leptin, as well as for the somatotroph and thyroid axes, in bone health. Thus, it is conceivable that other peripheral factors, such as body composition, may play a role in bone health in obesity.

Body composition: While lean body mass was decreased in postmenopausal women, percent body fat mass was similar between groups. Total body BMD was directly associated with lean body mass and BMI, whereas the association with body fat was the opposite. Furthermore, lean mass and menopause acted as independent predictors of total body BMD and explained nearly 50% of its variability. These findings highlight the prevalent role of dynamic biomechanical forces over passive loads on bone composition in obesity. Previous studies in adolescents and young women have linked bone strength to the dynamic loads from muscle force rather than fat mass [70,71]. In Chinese cohorts, the percentage of body fat was related to the risk of osteoporosis, osteopenia, and non-spine fractures independent of body weight [50]. Studies in different ethnicities confirmed that percent fat mass was inversely related to weight-adjusted bone mass after removal of age, sex, menopause status, exercise, and smoking [72]. Intriguingly, visceral fat accumulation was found to negatively predict femoral cross-sectional area, cortical bone area, principal moment maximum, principal moment minimum, and polar moment, while subcutaneous fat acted as a positive correlate [5]. Together, these findings corroborate the link between body fat and limb fractures in obese postmenopausal women.

Caveats of the study: This study has some limitations. Since our investigation included severely obese Caucasian women in the perimenopausal age range, these findings cannot be generalized to other contexts or extended to all obese women. The study design did not include a control group, due to the obesity-restricted analysis of bone metabolism across menopause. Because of the cross-sectional design, any alteration would be random with respect to case status, and could probably underestimate the observed associations. We believe that the strict inclusion criteria and population selection of the study constitute a point of strength.
Indeed, a future menopause transition study will appropriately reduce the risk of biased detection. Also, calcium intake and nutritional habits were not taken into appropriate account. Lastly and more notably, DXA has technical limitations in obesity associated with extraosseous soft tissue composition, such that BMD will appear to decrease more slowly in subjects with more soft tissue fat, and vice versa [73]. Increased soft-tissue inhomogeneity is likely to arise from a greater and/or more variable amount of visceral fat surrounding the organs and of subcutaneous fat around the hips in overweight and obese women. Increased percent body fat increases BMD precision errors, particularly at the lumbar spine and femoral neck regions of interest [74].

Conclusions

This cross-sectional analysis suggests that obesity protects the lumbar spine from bone loss caused by menopause, possibly through pathways involving sclerostin. The positive association seen between total body bone density and lean body mass offers a potential clue for preventive measures against osteoporosis in this setting. Menopause-transition studies are warranted to better discriminate the reasons for such selective changes in obesity.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Ethics Committee of Istituto Auxologico Italiano (protocol code 18C101_2011).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest: The authors declare no conflict of interest.
Joint Learning for Emotion Classification and Emotion Cause Detection

We present a neural network-based joint approach for emotion classification and emotion cause detection, which attempts to capture mutual benefits across the two sub-tasks of emotion analysis. Considering that emotion classification and emotion cause detection need different kinds of features (affective and event-based, respectively), we propose a joint encoder which uses a unified framework to extract features for both sub-tasks, and a joint model trainer which simultaneously learns two models for the two sub-tasks separately. Our experiments on Chinese microblogs show that the joint approach is very promising.

Introduction

The analysis of emotions in texts is an important task in NLP. Traditional studies treat this task as a pipeline of two separated sub-tasks: emotion classification and emotion cause detection. The former identifies the category of an emotion and the latter detects the cause of an emotion. This separated framework makes each sub-task more flexible to deal with, but it neglects the relevance between the two sub-tasks. In this paper, we explore joint approaches which can capture mutual benefits across the two relevant sub-tasks. To the best of our knowledge, this work is the first attempt to incorporate both emotion classification and emotion cause detection into a unified framework. Although emotion classification relies on affective features and emotion cause detection needs event-based features, we propose a joint encoder which uses a unified framework to extract features for both emotion classification instances and emotion cause detection instances. Then, we propose a joint model trainer which simultaneously learns two models for the two sub-tasks separately. The experiments on Chinese microblogs show that our joint approach can effectively learn models for both sub-tasks.

Corpus

In this paper, we use the human-labeled emotion corpus provided by Cheng et al. (2017) as our experimental data (namely the Cheng emotion corpus). To better explain our work, we adopt Twitter's terminology as used in Cheng et al. (2017). The Cheng emotion corpus can be considered a collection of subtweets. For each emotion in a subtweet, all emotion keywords expressing the emotion are selected, and then the class and the cause of the emotion are annotated. The emotion categorization used in Huang et al. (2016) is adopted, which includes four basic emotions (i.e., joy, angry, sad and fearful) and three complex emotions (i.e., positive, neutral and negative). E.g., in the following example, the class of the emotion keyword (" ") is sad, and the cause of the emotion is "only I was at home again".

Problem Formulation

In this paper, both the emotion classification sub-task (namely EClass) and the emotion cause detection sub-task (namely ECause) are clause-level. Given an instance which is a clause in a subtweet, EClass assigns one of seven labels (i.e., six emotion classes and the label 'non-emotion', which indicates the absence of an emotion) to the instance. Note that, because of the extremely low percentage of the emotion 'fearful' (~0.6%; see §3.1, Table 1), we ignore this emotion class in EClass. Given an instance which is a pair of <an emotion keyword, a clause in the subtweet>, ECause assigns a binary label to the instance to indicate the presence of a causal relation. Moreover, the clause-level EClass can effectively avoid the problem of multiple emotions, because clauses are fine-grained text units.
Furthermore, the input text of an EClass instance contains three sequences of words: the previous clause (PrevCL), the current clause (CurCL), and the following clause (FolCL). The previous clause and the following clause provide contextual information for the current clause. The input text of an ECause instance also has three sequences of words: the emotion keyword (EmoKW), the current clause (CauseCL), and the context between EmoKW and CauseCL. The emotion keyword serves as an anchor, the current clause gives the description of an event which may cause the emotion, and the context provides complementary information for the event. Moreover, each word is represented with a vector from our word embedding model, which is trained with word2vec 1 and the tweet corpus of Cheng et al. (2017).

The Joint Approach

As shown in Fig. 2, there are two parts in our joint approach, which is based on neural networks: a joint encoder (the lower part), which extracts feature representations for both EClass instances and ECause instances, and a linear decoder (the upper part), which assigns labels to instances according to their representations.

Neural Networks

In the joint encoder, there are two neural networks (the attention network and the LSTM network), and each neural network is composed of layers of two types: bidirectional LSTM (BiLSTM) and attention. The BiLSTM layer focuses on the extraction of sequence features, and the attention layer focuses on learning word importance (weights). Because of the feature sparsity problem in our small-scale experimental data, the attention network often cannot effectively extract features to represent an event (see §3.2). Thus, in our joint encoder, we use the attention network to extract affective features (e.g., " " in Fig. 1) and the LSTM network to extract event-based features (e.g., "I found that only I was at home again" in Fig. 1).

The attention network: we implement the attention network used in Felbo et al. (2017), which includes two layers: a BiLSTM layer, which extracts a sequence feature for each input word, and an attention layer, which represents the input sequence using weighted words.

The LSTM network: this network uses a BiLSTM layer to capture a sequence feature for each input word, and then uses the average of those features as the representation of the input sequence.

In the linear decoder, there are two classification networks (CNet EClass and CNet ECause), for EClass and ECause respectively. Each classification network uses a linear layer to build a probabilistic classification model.
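A minimal PyTorch rendering of these two building blocks may help (a sketch under our own assumptions; the dimensions follow the settings reported below, i.e., embedding size 20 and a BiLSTM output of 128, or 64 per direction):

    import torch
    import torch.nn as nn

    class AttentionEncoder(nn.Module):
        # BiLSTM + word-level attention, in the spirit of Felbo et al. (2017).
        def __init__(self, emb_dim=20, hidden=64):
            super().__init__()
            self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
            self.attn = nn.Linear(2 * hidden, 1)

        def forward(self, x):                       # x: (batch, seq_len, emb_dim)
            h, _ = self.bilstm(x)                   # (batch, seq_len, 2*hidden)
            w = torch.softmax(self.attn(h), dim=1)  # per-word attention weights
            return (w * h).sum(dim=1)               # weighted sum over words

    class LSTMEncoder(nn.Module):
        # BiLSTM followed by mean pooling over the sequence.
        def __init__(self, emb_dim=20, hidden=64):
            super().__init__()
            self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

        def forward(self, x):
            h, _ = self.bilstm(x)
            return h.mean(dim=1)                    # average of per-word features

The only difference is the pooling step: learned attention weights versus a uniform average, which matches the affective versus event-based division of labour described above.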
The Joint Encoder

As shown in Fig. 2, there are two sub-encoders in our joint encoder: Encoder EClass (the left part), which provides a representation for an EClass instance, and Encoder ECause (the right part), which extracts a representation for an ECause instance. Given an instance, one sub-encoder extracts a main representation (through the black lines in Fig. 2) and the other sub-encoder provides an auxiliary representation (through the blue or red lines in Fig. 2). Then, the concatenation of the two representations serves as the final representation for the instance (i.e., h_EClass or h_ECause in Fig. 2). To deal with the case where a main representation may be overwhelmed by its corresponding auxiliary representation, linear layers are used to reduce the dimensions of the auxiliary representations. Moreover, there are three sequences of words in the input text of both an EClass instance and an ECause instance. In order to use these input sequences effectively, a multi-channel structure is chosen, which encodes the input sequences one by one.

Encoder EClass: given the three sequences of words in an EClass instance (PrevCL, CurCL and FolCL), the attention network is applied to CurCL to extract an affective representation, and the LSTM network is applied to PrevCL and FolCL separately to extract two event-based representations. Then, the concatenation of the three representations is used as the main representation (i.e., h_main_EClass). Furthermore, in order to extract more contextual information, the LSTM network of Encoder ECause is applied to PrevCL and FolCL (through the blue lines in Fig. 2) to extract the auxiliary representation (i.e., h_aux_EClass), which provides another event-based view for our emotion classification.

Encoder ECause: in order to deal separately with the three sequences of words (EmoKW, CauseCL and Context) in an ECause instance, the LSTM network is applied to each input sequence, and the concatenation of the three representations is used as the main representation (i.e., h_main_ECause). Furthermore, for each input sequence (CauseCL or Context), the BiLSTM layer in the attention network is used to extract more event-based features (through the red lines in Fig. 2), and those features serve as an auxiliary representation (i.e., h_aux_ECause), which provides another event-based view for our emotion cause detection.

The Joint Model Trainer

During training, two models (JMEClass and JMECause) are learned simultaneously for the two sub-tasks (EClass and ECause), respectively. Model JMEClass contains Encoder EClass and CNet EClass, and Model JMECause contains Encoder ECause and CNet ECause. Although each model uses auxiliary representations from the other model, the learning of the model focuses on its own parameters. In other words, gradient calculation is disabled along the dashed lines in Fig. 2. In each episode, the batch of input data is composed of two sets of instances: an EClass sub-batch containing only EClass instances and an ECause sub-batch containing only ECause instances. Given the batch of data, the parameters of each model are updated according to its corresponding loss function. E.g., Model JMEClass uses only the EClass sub-batch, and its loss function is the mean squared error over the instances in the sub-batch. In our joint model trainer, the two models are optimized using their own loss functions, as in pipeline model training, but they use up-to-date auxiliary representations from each other to help optimization.
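The role of the disabled gradients can be sketched in PyTorch as follows (ours; the model classes and the encode_aux method are hypothetical stand-ins for the sub-encoders above, and the data handling is simplified). detach() plays the role of the dashed lines in Fig. 2:

    import torch.nn as nn

    def joint_step(model_eclass, model_ecause, eclass_batch, ecause_batch,
                   opt_eclass, opt_ecause, loss_fn=nn.MSELoss()):
        # Each model consumes the other's auxiliary representation, but .detach()
        # blocks gradients from flowing into the other model's parameters.
        x_e, y_e = eclass_batch
        x_c, y_c = ecause_batch

        aux_for_eclass = model_ecause.encode_aux(x_e).detach()
        loss_e = loss_fn(model_eclass(x_e, aux_for_eclass), y_e)
        opt_eclass.zero_grad(); loss_e.backward(); opt_eclass.step()

        aux_for_ecause = model_eclass.encode_aux(x_c).detach()
        loss_c = loss_fn(model_ecause(x_c, aux_for_ecause), y_c)
        opt_ecause.zero_grad(); loss_c.backward(); opt_ecause.step()
        return loss_e.item(), loss_c.item()

Each model therefore sees up-to-date auxiliary features from its partner at every step while optimizing only its own parameters, exactly as the trainer description requires.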
Experimental Setup

In the Cheng emotion corpus, there are ~3,000 subtweets, ~11,000 instances for EClass, and ~13,000 instances for ECause. Moreover, Table 1 lists the class distribution in the Cheng emotion corpus for EClass. All experiments in this paper are trained and tested by 5-fold cross-validation on the Cheng emotion corpus, and all reported results are the averages of the 5-fold cross-validation performances. We use precision, recall and F-score as our evaluation metrics. However, because of the high percentage of the label 'non-emotion' in EClass (see Table 1) and of the label '0' in ECause, similar to previous work (Felbo et al., 2017; Cheng et al., 2017), we report only the evaluation metrics of the six emotion classes for EClass and the evaluation metrics of the label '1' for ECause.

During our joint training process, the dimension of the word embeddings is 20; the output dimension of the BiLSTM layer used in both the LSTM network and the attention network is 128; the output dimension of the linear network is 8; and the batch size is 32. The two models (JMEClass and JMECause) which are learned by our joint approach are compared with several pipeline models which are learned in a pipeline manner (i.e., either for EClass or for ECause) using one of the following state-of-the-art encoders:

- ATT: the attention network in Fig. 2.
- LSTM: the LSTM network in Fig. 2.
- ATT+LSTM: a hybrid encoder for emotion classification, which applies ATT to CurCL and LSTM to PrevCL and FolCL.
- ConvMSMemnet: a previously proposed encoder for emotion cause detection, which applies a convolutional multiple-slot deep memory network to CauseCL.

Table 3: The performances of the six emotions in JMEClass.

Method Analysis

In Table 2, Model ATT + CurCL outperforms LSTM + CurCL by 2.8% in F-score, where ATT is a state-of-the-art encoder for emotion classification (Felbo et al., 2017). The significant performance improvement means that ATT can effectively extract affective features in CurCL. In fact, emotion classification on Chinese microblogs can rely heavily on emotion keywords occurring in CurCL. E.g., ~50% of the emotional instances in our experimental data contain emoticons (e.g., " " in Fig. 1) in CurCL, and those emoticons themselves are strong emotion indicators. Secondly, when different kinds of contextual information are incorporated into Model ATT + CurCL, different performance improvements are obtained (0.2% for ATT + all and 0.7% for ATT+LSTM in F-score). This indicates that, for emotion classification, the event-based features extracted by LSTM are more helpful than the affective features extracted by ATT, because contexts often provide the cause event of an emotion. E.g., in Fig. 1, the previous clause of " " contains its cause, "only I was at home again". Finally, taking advantage of the event-based features extracted by JMECause, JMEClass outperforms the best pipeline model (ATT+LSTM) by 0.7% in F-score. This shows that it is important for emotion classification to have an encoder which can effectively extract event-based features from contexts.

In Table 3, the performance of a basic emotion (i.e., joy, angry or sad) is often better than that of a complex emotion (i.e., positive, neutral or negative). However, in Table 1, the data size of a basic emotion is often smaller than that of a complex emotion. This indicates that the difference in performance is likely linked to differences in the emotional contents of labels rather than to differences in data sizes. E.g., the complex emotion 'negative' (i.e., a collection of complex emotions with negativity, such as 'hate', 'anxious', and so on) is more diverse than the basic emotion 'sad', and this diversity in emotional contents brings more challenges to the detection of this complex emotion. Furthermore, even though both 'sad' and 'angry' are basic emotions and have similar data sizes in our experimental data, it seems much easier to detect 'sad' instances than 'angry' instances. This is perhaps because 'angry' is caused by a greater variety of events, and it is more difficult to capture and utilize those cause events. Thus, it is necessary for emotion classification to have an encoder which can extract the event-based information of emotion cause from texts.
Table 4 shows the performances of different emotion cause detection models, where "Sequence" lists the sequences of input words used by each model. In Table 4, JMECause outperforms the best pipeline model (LSTM) by 0.8% in F-score. The LSTM encoder is a state-of-the-art approach used for emotion cause detection (Cheng et al., 2017). Furthermore, the performance improvement of JMECause comes from a significant increase in recall (5.4%). This indicates that more emotion causes are correctly detected when the event-based features extracted by Model JMEClass are incorporated. Moreover, among all models, the two models ATT and LSTM achieve relatively high precision and relatively low recall, while ConvMS-Memnet obtains the lowest precision and the highest recall. This means that both ATT and LSTM suffer from a feature coverage problem, because some useful features cannot be extracted by their encoders, while ConvMS-Memnet suffers from a feature quality problem, perhaps because its encoder cannot handle the informal writing style used in Chinese microblogs.

Related Work

In recent years, intensive studies have explored supervised machine learning approaches using various types of features for emotion classification at different levels, such as the document level (Alm et al., 2005; Li et al., 2014; Huang et al., 2016) and the sentence or short-text level (Tokushisa et al., 2008; Bhowmick et al., 2009; Xu et al., 2012; Wen and Wan, 2014; Felbo et al., 2017). Moreover, since both emotion and sentiment belong to affective feeling, some studies have explored the joint learning of sentiment classification and emotion classification (Gao et al., 2013). On the other hand, most previous emotion cause detection studies are clause-based, examining whether a clause around a given emotion keyword is a cause or not. Moreover, these studies (Chen et al., 2010; Ghazi et al., 2015; Cheng et al., 2017) focus on how to extract two kinds of features for supervised model learning: explicit expression patterns (e.g., "to cause", "for") and implicit features which can reflect the causal relation.

Conclusion

In this paper, we focused on a joint learning approach to emotion classification and emotion cause detection on Chinese microblogs, and our experiments show that such a joint approach is very promising.
Vesicle trafficking maintains nuclear shape in Saccharomyces cerevisiae during membrane proliferation

Disruption of vesicle trafficking results in distortion of nuclear shape and increased nuclear envelope surface area but doesn't alter the nuclear/cell volume ratio.

The parameters that control nuclear size and shape are poorly understood. In yeast, unregulated membrane proliferation, caused by deletion of the phospholipid biosynthesis inhibitor SPO7, leads to a single nuclear envelope "flare" that protrudes into the cytoplasm. This flare is always associated with the asymmetrically localized nucleolus, which suggests that the site of membrane expansion is spatially confined by an unknown mechanism. Here we show that in spo7 cells, mutations in vesicle-trafficking genes lead to multiple flares around the entire nucleus. These mutations also alter the distribution of small nucleolar RNA-associated nucleolar proteins independently of their effect on nuclear shape. Both single- and multi-flared nuclei have increased nuclear envelope surface area, yet they maintain the same nuclear/cell volume ratio as wild-type cells. These data suggest that, upon membrane expansion, the spatial confinement of the single nuclear flare is dependent on vesicle trafficking. Moreover, flares may facilitate maintenance of a constant nuclear/cell volume ratio in the face of altered membrane proliferation.

Introduction

Organelles have distinct morphologies, which are thought to be important for their functions. Yet, little is known about how organelles establish and maintain their shape and size. For example, budding and fission yeast maintain a constant nuclear/cell (N/C) volume ratio, but the mechanism that controls this ratio is unknown (Jorgensen et al., 2007; Neumann and Nurse, 2007). The membrane of the nuclear envelope (NE) is continuous with the ER, yet how membrane is partitioned between the NE and ER such that the nucleus acquires its typical morphology is not understood. Over the past decade, much attention has been given to the link between nuclear morphology and pathology, as altered nuclear shape is associated with both premature and normal aging, and with certain types of cancers (for review see Webster et al., 2009).

The nucleus of the budding yeast Saccharomyces cerevisiae is spherical in interphase, with the bulk of the chromatin occupying the majority of the nuclear volume. The nucleolus is confined to a crescent-shaped region at the nuclear periphery (Fig. 1 A). Budding yeast that lack the SPO7 gene (spo7) have abnormally shaped nuclei, characterized by expansion of the NE, referred to as a "flare," which is confined to the NE region associated with the nucleolus (Fig. 1 A, arrow; Siniossoglou et al., 1998; Campbell et al., 2006). The absence of Spo7p results in membrane proliferation, in both the NE and the ER, because of the protein's role in regulating phospholipid synthesis (Santos-Rosa et al., 2005; O'Hara et al., 2006). Despite this membrane proliferation, the NE associated with the bulk of the chromatin maintains its normal shape (Fig. 1 A; Siniossoglou et al., 1998; Campbell et al., 2006). These observations suggest the existence of a mechanism for maintaining nuclear shape in regions of the NE that are associated with the DNA mass.

Here we show that Golgi-associated vesicle trafficking is needed to maintain nuclear shape under conditions of membrane proliferation. When vesicle trafficking is disrupted, nuclei in cells experiencing membrane proliferation have multiple flares and an increase in NE surface area, but the N/C volume ratio remains the same as in wild-type cells. These observations suggest that when faced with membrane proliferation, cells alter nuclear shape by forming projections that increase nuclear surface area without perturbing the N/C volume ratio. Moreover, vesicle trafficking is needed to confine these projections to the NE region associated with the nucleolus.

Results and discussion

A screen to identify proteins and processes that affect nuclear morphology

The flare phenotype of spo7 cells suggests that there is a mechanism preventing nuclear membrane expansion in the region of the NE surrounding the chromatin. Thus, introducing a mutation that disrupts this mechanism to spo7 cells could result in a multi-flared nucleus (Fig. 1 B). Such a mutation may not affect nuclear shape in an otherwise wild-type cell, where membrane biogenesis is tightly regulated. Additionally, nuclear processes could be disrupted in multi-flared nuclei, compromising cell viability. We therefore conducted a synthetic lethal screen for randomly generated mutations that reduced the viability of spo7 cells, followed by a secondary screen for mutations that affected nuclear morphology when Spo7p was inactive. We obtained roughly 150 strains carrying mutations that were synthetically lethal/sick with spo7. To examine nuclear morphology, these strains were transformed with a plasmid, pspo7-12, expressing a conditional allele of spo7 (spo7 ts) that is hypomorphic at 30°C and nonfunctional at 37°C (Campbell et al., 2006). These strains also expressed the nucleoplasmic protein Pus1p fused to GFP (pGFP-PUS1; Hellmuth et al., 1998). Of the synthetic lethal strains, three strains had a multi-flared nuclear phenotype after a 3-h temperature shift to 37°C (Fig. 1 C).

ARL1 and SYS1, which are involved in endosome to late Golgi trafficking, affect nuclear shape

Two of the multi-flared strains carried a premature stop codon in the SYS1 gene at amino acids Q18 or W108 (out of 203 amino acids). The third strain carried a mutation in the ARL1 gene, leading to a G2D substitution. ARL1 and SYS1 code for evolutionarily conserved proteins involved in retrograde vesicle trafficking from the endosome to late Golgi, as well as anterograde trafficking of Gas1p to the plasma membrane (Tsukada and Gallwitz, 1996; Lee et al., 1997; Rosenwald et al., 2002; Panic et al., 2003; Liu et al., 2006). Sys1p is a transmembrane domain protein required for localization of Arl1p to the late Golgi (Behnia et al., 2004; Setty et al., 2004). In mammals, Arl1p's localization also requires myristoylation of a glycine at position 2 (Lu et al., 2001; Burd et al., 2004), the analogous glycine mutated in our multi-flared strain.

To confirm that mutations in ARL1 or SYS1 caused the multi-flare phenotype, these genes were deleted in wild-type or spo7 ts cells and reexamined for genetic interactions with spo7 and effects on nuclear morphology. Both arl1 and sys1 proved to be synthetically lethal with spo7 (Fig. S1 and Table I, group A). Additionally, sys1 and arl1 caused multi-flared nuclei in spo7 ts cells, whereas they had subtle or nonexistent effects on nuclear shape when deleted in an otherwise wild-type cell (Fig. 1, D and E). Three-dimensional reconstruction by confocal microscopy of arl1 spo7 ts nuclei (n = 29) revealed that multi-flared nuclei are not fragmented, but rather have numerous lobulations (Fig. 1 E). To characterize chromatin distribution in multi-flared nuclei, histone H2B fused to mCherry red fluorescent protein (Htb2p-CR) was expressed in wild-type, arl1, spo7 ts, and arl1 spo7 ts strains expressing GFP-PUS1. After a 2-h temperature shift, the distribution of Htb2p-CR closely resembled that of GFP-Pus1p in all strains (Fig. 1 F), which indicates that in multi-flared nuclei, the chromatin did not detach from the NE.

To exclude the possibility that the multi-flare phenotype was a result of cell death, two independent approaches were used: we examined the fraction of dead cells before and after a temperature shift using a vital stain, and we followed the ability of multi-flared cells to recover when returned to the permissive temperature. When the vital stain methylene blue, which stains dead cells, was applied to arl1 spo7 ts cells before or after a 3-h temperature shift to 37°C, the percentage of dead cells at either temperature was very similar (8.0 ± 1.4% for 30°C and 10.5 ± 0.7% for 37°C), and viable cells with multi-flared nuclei were readily detectable at 37°C. Thus, after a 3-h temperature shift, the formation of multi-flared nuclei is not a result of cell death. To examine the recovery of cells with multi-flared nuclei, arl1 spo7 ts cells expressing GFP-Pus1p were returned to 30°C after a 3-h temperature shift to 37°C, then observed by microscopy at 1-h intervals, up to 5 h. For each time point, three-dimensional images of nuclei were overlaid on two-dimensional phase images of the cells to determine cell cycle progression (Fig. 2). Of 11 cells with multi-flare nuclei at t0, there were seven cells that underwent nuclear division (albeit with a prolonged cell division time; e.g., see Fig. 2 A), one cell that began nuclear division, one cell in which the nucleus did not divide (Fig. 2 B), and two cells that died during the 5-h experiment (as determined by the dispersal of the GFP signal). In all cases, the shape of the nucleus changed during the time course (Fig. 2, A and B). Moreover, for cells that did divide, the multi-flared phenotype was lost, and nuclei formed either a single flare (spo7 phenotype; Fig. 2 A and not depicted) or a round shape after division (not depicted). Thus, the multi-flare nucleus is not a consequence of cell death but is rather a dynamic structure that can be resolved once cells are returned to permissive conditions.

[Figure 1 legend (partial): the yeast nucleus showing the NE (green), the nucleolus (red), and the DNA (blue) in wild-type and spo7 cells, and hypothetical nuclear phenotypes that would result from a mutation leading to a multi-flare phenotype with (right) or without (left) loss of DNA tethering to the NE. (C) Nuclear phenotype of strain MWY254, carrying a mutation that is synthetically lethal with spo7, observed after a 2-h temperature shift to 37°C. Nuclear morphology was assessed with the nucleoplasmic protein Pus1p fused to GFP (GFP-Pus1p). For comparison to nuclei of wild-type and spo7 ts cells, see D or F. (D) Nuclear phenotypes by GFP-Pus1p, associated with arl1 and sys1 mutations, alone or in combination with spo7 ts, after a 2-h temperature shift to 37°C. (E) Three-dimensional reconstruction of nuclei from arl1 (top left), spo7 ts (top right), and two arl1 spo7 ts (bottom) cells. Cells were shifted to 37°C for 2 h. (F) Spatial distribution of chromatin in wild-type, arl1, spo7 ts, and arl1 spo7 ts strains. Chromatin is visualized by the histone H2B fused to mCherry (Htb2p-CR), and nuclear morphology is detected by GFP-Pus1p. Cells were shifted to 37°C for 2 h. Bars: (A, C, D, and F) 2 µm; (E) 1 µm.]

Vesicle trafficking via the Golgi affects nuclear shape and distribution of nucleolar proteins

If Arl1p and Sys1p affect nuclear morphology via their known role in vesicle trafficking, then inactivation of other genes in this pathway may have a similar effect. Genetic interactions between the deletion of NEM1, which codes for the phosphatase regulated by Spo7p, and other endosome to late Golgi-trafficking genes, including ARL3, YPT6, and RIC1, have been described previously (Schuldiner et al., 2005). Consistent with these reports, deletions of ARL3, YPT6, or RIC1 were synthetically lethal with spo7 (Fig. S1 and Table I, group B). Importantly, these gene deletions also caused the multi-flared phenotype when combined with spo7 ts (Table I, group B), but had only subtle effects on nuclear shape in an otherwise wild-type strain (unpublished data). Thus, the involvement of endosome-to-late Golgi-trafficking proteins in nuclear morphology extends beyond Arl1p and Sys1p. To determine if other vesicle-trafficking pathways also affect nuclear morphology, we tested additional genes known to interact genetically with spo7 and/or nem1 (Table I, group C; Tong et al., 2004; Schuldiner et al., 2005) and genes belonging to well-characterized vesicle-trafficking complexes (Table I, group D). The deleted genes ranged from having no genetic interaction to being synthetically lethal with spo7 (Table I and Fig. S1). The strength of the genetic interaction between spo7 and each mutation was assessed by the spot assay (Fig. S1), ranging from synthetic lethality (++++) to no interaction (-). Not all vesicle-trafficking genes led to a multi-flare phenotype in spo7 ts nuclei, although all mutations that led to a multi-flared phenotype had genetic interactions with spo7. Thus, only a subset of vesicle-trafficking complexes and pathways, especially those associated with the Golgi, affect nuclear shape under our experimental conditions.

Given the association between the spo7 flare and the nucleolus (Fig. 1 A), we examined the localization of the nucleolus in multi-flared nuclei (spo7 ts cells deleted for ARL1, YPT6, or COG8) using Nsr1p fused to mCherry red fluorescent protein (Nsr1p-CR). At 30°C, all strains exhibited either a normal crescent nucleolus or a single flare filled with Nsr1p-CR (referred to as extended Nsr1p-CR; Fig. 3 A, arrows; and not depicted). After a 3-h shift to 37°C, >80% of cells in all three double mutant strains showed a coalescence of Nsr1p-CR to a single nuclear focus (Fig. 3 A, arrowhead; and not depicted). The temperature-dependent coalescence of Nsr1p-CR was uncommon in wild-type and spo7 ts cells (observed in 10.0 ± 2.8% and 7.0 ± 1.4% of cells, respectively), but was prevalent in arl1, ypt6, and cog8 single mutants (55.0 ± 5.7%, 97.0 ± 0%, and 94.5 ± 2.1%, respectively; Fig. 3 B and not depicted). This focal appearance was also observed with two other common nucleolar markers: Gar1p and Utp10p (unpublished data). However, by electron microscopy, overall nucleolar structure remained normal in the arl1 cells at 37°C (n = 23; Fig. 3 C). Though they were coincident, the focal appearance of these nucleolar markers (all of which are small nucleolar RNA-associated proteins) and the multi-flared phenotype were independent of each other: at early time points after the temperature shift, we detected both multi-flare nuclei with extended Nsr1-CR (Fig. S2 B) and single-flare nuclei with an Nsr1-CR focus (Fig. S2 C). It is possible that the focal appearance of certain nucleolar proteins in vesicle-trafficking mutants is related to the defect in ribosome synthesis reported for ypt6 and ric1 mutant cells (Li and Warner, 1996; Mizuta et al., 1997).

Single- and multi-flared nuclei maintain a normal N/C volume ratio despite an increase in NE surface area

The size of the yeast nucleus scales in proportion to cell size (Jorgensen et al., 2007; Neumann and Nurse, 2007). In theory, perturbations to nuclear morphology caused by membrane proliferation could alter the N/C volume ratio. Alternatively, the N/C volume ratio could remain constant, for example, if the excess nuclear membrane is sequestered in protrusions or invaginations. In the case of spo7 ts and arl1 spo7 ts cells, we found the latter to be true. Nuclear surface area and volume were obtained using three-dimensional reconstruction (see Materials and methods), as shown in Fig. 1 E. The N/C ratio was not affected by the temperature at which cells were grown, growth media, or imaging fixed versus live cells (unpublished data, and see Materials and methods). For reasons that are currently unclear, spo7 ts cells are significantly larger than the three other strains tested (Table II). Nonetheless, the N/C volume ratios of wild-type, spo7 ts, and arl1 spo7 ts strains were nearly the same (Table II). However, nuclei of both spo7 ts and arl1 spo7 ts cells had greater surface area than nuclei of wild-type cells (Table III). To compare nuclear surface areas between cells of potentially different sizes, we normalized the measured surface area for each nucleus to the surface area it would have had, had the nucleus been a perfect sphere (see Materials and methods). For wild-type cells, the mean measured and calculated nuclear surface areas were very similar (measured/calculated = 1.08 ± 0.04; Table III), as expected from the nearly spherical shape of wild-type nuclei. In contrast, the surface areas of spo7 ts and arl1 spo7 ts nuclei were on average 1.27 and 1.48 times greater than a sphere, respectively (Table III; P < 0.0001). Because the N/C volume ratios for wild-type, spo7 ts, and arl1 spo7 ts are the same, this means that for a given cell size, the nucleus of a multi-flared cell will have, on average, a surface area that is 1.48 times greater than that of the spherical wild-type nucleus. We speculate that by sequestering excess nuclear membrane to one or more flares, spo7 ts and arl1 spo7 ts cells can maintain a normal N/C volume ratio.

Table II. Cell and nuclear volumes (partial; one genotype label was not recovered in extraction)
Genotype | Cell volume (µm³) | Nuclear volume (µm³) | Nuclear volume (% of cell volume)(a)
[not recovered] | 36.3 ± 15.8 | 4.1 ± 0.9 | 12 ± 3*
spo7 ts | 53.6 ± 19.0 | 6.6 ± 1.6 | 14 ± 4
arl1 spo7 ts | 37.9 ± 16.4 | 6.0 ± 1.8 | 18 ± 7
(a) Asterisk denotes significance (Student's t test, P = 0.0016) compared to the mean percentage of nuclear volume out of cell volume in wild-type cells.

Table III. Nuclei of spo7 ts and arl1 spo7 ts cells have greater surface area than nuclei of wild-type cells
Genotype | Actual surface area (µm²) | Calculated (spherical) surface area (µm²) | Fold increase over sphere(a)
WT | 13.6 ± 2.8 | 12.6 ± 2.8 | 1.08 ± 0.04
arl1 | 13.9 ± 2.6 | 12.4 ± 1.8 | 1.12 ± 0.09
spo7 ts | 21.6 ± 4.6 | 17.0 ± 2.7 | 1.27 ± 0.12*
arl1 spo7 ts | 23.7 ± 6.2 | 15.9 ± 3.1 | 1.48 ± 0.13*
(a) Asterisks denote significance (Student's t test, P < 0.0001) compared to mean fold increase of wild-type nuclear surface area over a perfect sphere.

It is likely that internal nuclear organization is important for nuclear function. This is true in metazoans, where mutations in nuclear lamina proteins affect both nuclear shape and cell fitness (for review see Webster et al., 2009). Yeast and plants do not have genes homologous to nuclear lamina genes, and thus it is of interest to understand what determines nuclear organization in these organisms, findings that could extend to mammalian cells. Here, we identified vesicle-trafficking genes that affect nuclear morphology. Mutations in these trafficking genes are synthetically lethal with spo7. Although it is tempting to speculate that gross alteration in nuclear structure, and consequently abnormal chromatin distribution (Fig. 1 F), is the underlying cause, at this point we cannot rule out the possibility that synthetic lethality is caused by an uncharacterized defect.

Many of the genes that we have identified as affecting nuclear morphology are involved in trafficking to and through the Golgi. It is possible that yet unknown proteins must get modified as they traffic through the Golgi before they can function in nuclear shape maintenance. Alternatively, membrane trafficking may be important for the redistribution of membrane or specific lipids to maintain homeostasis of the endomembrane system. For example, defects in trafficking membrane from the ER lead to ER expansion and abnormal nuclear morphology (Kimata et al., 1999). In principle, defects in Golgi trafficking could lead to membrane expansion through the induction of the unfolded protein response (Mousley et al., 2008). However, induction of the unfolded protein response in spo7 cells did not cause a multi-flare phenotype (unpublished data). Moreover, arl1 alone did not cause expansion of the ER, nor did it exacerbate the ER defect of spo7 mutant cells (Fig. 3 C and not depicted). Lipid trafficking from the Golgi is also necessary for the membrane expansion leading to the formation of autophagosomes (Abudugupur et al., 2002; Reggiori et al., 2004; Geng et al., 2010; Ohashi and Munro, 2010; van der Vaart et al., 2010), which illustrates that the Golgi can function not only in protein sorting, but also in endomembrane distribution. We did observe in arl1 and arl1 spo7 ts cells a significant increase in the punctate distribution of the endosome marker, Vps24p fused to GFP, as well as a modest increase in the number of early Golgi structures, labeled by Vrg4p fused to GFP (unpublished data). Additionally, mutations in arl1 have been found to cause fragmentation of the vacuole (Abudugupur et al., 2002; Rosenwald et al., 2002). These observations indicate that disrupting endosome-to-Golgi trafficking leads to expansion and/or fragmentation of organelles within the endomembrane system, which suggests that the Golgi could affect the NE through its role in membrane distribution. In this study, we observed a relative increase in the surface area of the nucleus in arl1 spo7 ts cells (Tables II and III). Collectively, our data provide a novel link between vesicle trafficking and nuclear morphology, and raise the possibility that the Golgi regulates the availability of nuclear membrane, either directly or indirectly.

Materials and methods

Media and growth conditions
Cells containing plasmids were initially grown in synthetic complete liquid media (2% glucose, 0.17% yeast nitrogen base, 2.5% ammonium sulfate, and the appropriate amino acid and nucleoside mix) to ensure plasmid maintenance. Before fixation, cells were grown overnight at 30°C in YPD (1% yeast extract, 2% peptone, and 2% glucose) to mid-log phase and then shifted to 37°C for the times indicated. For the vital stain assay, cells were grown to mid-log phase in YPD with or without a 3-h temperature shift, resuspended in 0.2 µg/ml methylene blue (Sigma-Aldrich) in 0.05 M KH2PO4 for 5 min, and scored for dye accumulation.

Strains
All strains were in the W303 background (see Table S1 for strain list). Strains containing spo7 pspo7-12 (see the following paragraph) are referred to in the text as spo7 ts. Gene disruptions and fusions were done according to Longtine et al. (1998) and Goldstein and McCusker (1999). Genomic fusions of the gene coding for the Discosoma sp. red fluorescent protein, mCherry, were performed by transformation of a PCR product from the plasmid pPW57, a gift from J. Thorner (University of California, Berkeley, Berkeley, CA), modified from pPW58 (Westfall and Thorner, 2006) by switching URA3 with his5+.

Plasmids
pGFP-PUS1: pASZ11-ADE2-PUS1-GFP, encoding GFP-PUS1 (Hellmuth et al., 1998), was modified by PCR-mediated replacement of the ADE2 gene.

Isolation of synthetic lethal mutations and identification of strains with abnormal nuclear morphology
Strains JCY607 and TH4201 were mutagenized with ethyl methanesulfonate to 50% cell viability, as described previously (Ross and Cohen-Fix, 2003), and screened for mutations causing synthetic lethality with spo7 via a sectoring assay (Bender and Pringle, 1991). Synthetic lethality was confirmed by checking sensitivity of strains to growth on synthetic media plates containing 0.05% 2-amino-5-fluorobenzoic acid (FAA; Sigma-Aldrich), which is toxic to cells expressing Trp1p from the pSPO7 plasmid (Toyn et al., 2000). To score nuclear morphology, the pSPO7 plasmid was swapped with pspo7-12. Strains were then transformed with pGFP-PUS1 and examined for nuclear morphology after a 2-h temperature shift to 37°C. Strains showing a >20% penetrance of an abnormal nuclear phenotype were backcrossed and rescored to ensure that the synthetic lethality and nuclear phenotype were linked and caused by a mutation in a single locus.

Cloning mutations by complementation of synthetic lethality
Strains were transformed with a yeast genomic library cloned on a LEU CEN plasmid (ATCC 37323), then grown on plates containing 5-fluoroorotic acid (5-FOA; Zymo Research) to select for clones that allowed growth in the absence of the pspo7-12 plasmid (which also codes for URA3; Boeke et al., 1984). Genes present on the suppressing genomic clones were sequenced in the mutagenized genomes to identify the ethyl methanesulfonate-generated mutations.

Fluorescence microscopy and measurements
Cells were fixed in YPD containing 4% paraformaldehyde (Electron Microscopy Sciences) for 1 h at 23°C, washed with 1× PBS, and stored at 4°C for no more than 72 h. For fluorescent and differential interference contrast imaging, fixed cells were incubated briefly in 0.1% Triton X-100 immediately before observation and mixed with an equal volume of Vectashield with DAPI (Vector Laboratories). Images were captured with a microscope (E800; Nikon), equipped with a 100× Plan Fluor DIC H objective lens, using a charge-coupled device (CCD) camera (C4742-95; Hamamatsu Photonics) and operated by IPLab 3.0 software (Scanalytics, Inc.). Image overlays in Fig. 1 A were done with the pseudo-colored images using the IPLab software.

For three-dimensional reconstruction and volume analyses of cells in G1 phase of the cell cycle, fixed cells were incubated for 5 min in 100 µg/ml calcofluor white (Sigma-Aldrich) at room temperature, washed with 1× PBS, mixed with an equal volume of Vectashield mounting medium, and aliquoted onto a 3% agarose pad. For live cell recovery from 3 h at 37°C, cells were mounted on 3% agarose pads containing synthetic complete media and maintained at 30°C with an objective heater controller (Bioptechs). Images were captured by spinning-disk confocal microscopy using a microscope (Eclipse TE2000U; Nikon) equipped with a 100× Plan-Apochromat objective lens. The imaging system also included a spinning-disk unit (CSU10; Yokogawa) and an electron microscope CCD camera (C9100-13; Hamamatsu). Confocal images at 0.2-µm intervals were acquired using IPLab 4.0 software. Cell and nuclear volumes, as well as nuclear surface area, were acquired with Imaris 7.0.0 software (Bitplane). Images were adjusted using "Auto blend" followed by smoothing using "Median filter 3 × 3 × 1." A surface was then created using a threshold of absolute intensity. Nuclear volume and surface area of the created surfaces were calculated by Imaris 7.0.0. For cell volume, cells were considered to be prolate spheroids with a volume equal to (4/3)πa²b, where a is the short radius and b is the long radius. Radii were determined by taking half of the longest diameters for each cell as measured in an individual stack using the "slice" view in Imaris 7.0.0. Surface areas of hypothetical "spherical" nuclei were calculated for each nucleus from the measured nuclear volume. Measurements were taken from 21 cells of each genotype after a 3-h temperature shift to 37°C. We calculated averages for cell volumes, nuclear volumes, and the percentage of nuclear volume out of the cell volume. For each cell, the actual nuclear surface area was divided by a calculated nuclear surface area (deduced from the nuclear volume measurements, assuming the object is a sphere) to determine the fold increase in actual surface area over that of a sphere. Statistical analyses in Tables II and III were done using an unpaired Student's t test with Bonferroni correction. Note that in our hands, the nucleus of wild-type cells occupies a greater fraction of the cell volume than described previously (16% here compared with 7% in Jorgensen et al., 2007). This difference is unlikely to be the result of the temperature at which cells are grown, the growth media, or the imaging of fixed versus live cells (see main text). Additionally, the range for our measured nuclear volumes, 4.3 ± 1.4 µm³, is close to that described by Jorgensen et al. (2007), 2.91 ± 0.85 µm³, and our cell volumes are consistent with data described previously for G1 cells (Jorgensen et al., 2004).
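The sphere normalization just described reduces to a closed-form computation: a sphere of volume V has surface area (36π)^(1/3) V^(2/3). The following NumPy sketch shows the prolate-spheroid cell volume, the sphere-equivalent nuclear surface area, and the fold increase; the numeric values are illustrative placeholders loosely matching Tables II and III, not the study's measurements.

```python
# Sketch of the volume / surface-area normalization described above.
# Values are illustrative, not measurements from the paper.
import numpy as np

def spheroid_volume(a, b):
    """Prolate spheroid volume (4/3)*pi*a^2*b, with a = short radius,
    b = long radius (used for whole-cell volume)."""
    return (4.0 / 3.0) * np.pi * a**2 * b

def sphere_area_from_volume(v):
    """Surface area of a perfect sphere of volume v:
    A = (36*pi)**(1/3) * v**(2/3)."""
    return (36.0 * np.pi) ** (1.0 / 3.0) * v ** (2.0 / 3.0)

nuclear_volume = 4.1       # µm^3, from the 3D reconstruction
measured_area = 13.6       # µm^2, measured nuclear surface area
calculated_area = sphere_area_from_volume(nuclear_volume)  # ~12.4 µm^2
fold_increase = measured_area / calculated_area            # ~1.1 if near-spherical

cell_volume = spheroid_volume(a=1.9, b=2.4)  # illustrative radii, µm
nc_ratio = nuclear_volume / cell_volume      # nuclear/cell volume ratio
print(f"fold increase over sphere: {fold_increase:.2f}, N/C ratio: {nc_ratio:.2f}")
```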
Electron microscopy
Cells were fixed and processed for electron microscopy as described previously (Rieder et al., 1996). The grids were examined on a transmission electron microscope (EM 420; Philips), and images were collected with a Soft Imaging System MegaView III CCD camera (Olympus). Figures were assembled in Photoshop (Adobe) with only linear adjustments in brightness and contrast.

Online supplemental material
Fig. S1 shows testing of genetic interactions between spo7 and deletions of vesicle-trafficking genes. Fig. S2 shows that formation of Nsr1p-CR foci and multi-flared nuclei in arl1 spo7 ts cells are independent of each other. Table S1 lists the strains used in this study. Online supplemental material is available at http://www.jcb.org/cgi/content/full/jcb.201006083/DC1.
Daytime running lights in the USA: what is the impact on vehicle crashes in Minnesota?

Background: Daytime running lights (DRLs) are a safety feature intended to reduce crashes by increasing the contrast between vehicles and the background. Aims: The purpose of this study was to determine whether there is an association between vehicles in the USA being equipped with DRLs and crash rates. Methods: This was a retrospective study using the Minnesota Department of Transportation (MNDOT) Crash Database from 1995 to 2002. Crash reports included in the analyses were limited to accidents involving vehicles of model year 1995 or newer (DRLs were not available on prior models) and limited to ideal conditions: (1) daylight, (2) optimal visibility, and (3) dry road surface. The vehicle identification number (VIN) was used to determine the make, model, and year. This information was cross-referenced with a National Highway Traffic Safety Administration table of manufacturer-listed DRL conditions to determine vehicle DRL status. Crude crash rates for vehicles were calculated relative to the number of all registered vehicles in Minnesota in 2004, for models 1995-2002. Ninety-five percent confidence intervals (CI) for the rates were constructed assuming a Poisson error distribution. Results: During 1995-2002, there were 184,637 vehicles (1995 or newer) with identifiable VINs involved in accidents which occurred under the specified test conditions. Of these vehicles, 37,909 were determined to have standard DRLs and 146,728 were determined to be models without DRLs (including those listed as DRL optional). The crash rate among vehicles without standard DRLs was 1.73 (95% CI: 1.71-1.75) times higher than the rate for vehicles with standard DRLs. The rate ratio was also significant for fatal vehicle crash rates: 1.48 (95% CI: 1.23-1.76). Conclusion: Minnesota vehicles equipped with DRLs were associated with a statistically significant reduction in crash rate compared to vehicles without DRLs from 1995 to 2002.

Introduction

Daytime running lights (DRLs) are a safety feature intended to reduce crashes by increasing the contrast between vehicles and the background. Currently, Finland, Sweden, Norway, Canada, Denmark, Hungary, and Iceland all require vehicle lights during daytime hours. Most of the studies of the effectiveness of DRLs have been done in Scandinavia. Finland was the first to institute DRL legislation in rural areas, and the literature reports a 27% crash rate reduction [1]. In 1977, Sweden started requiring the use of daytime vehicle lights on all roads, and reductions in crash rates of 9 to 21% were reported by Andersson and Nilsson [2]. Norway began to require the installation of DRLs in all new cars beginning in 1985 and the use of daytime lights on all vehicles by 1988. A 15% crash rate reduction for crashes involving more than one vehicle was later reported by Elvik [3]. Lastly, Denmark has required the use of DRLs on all roads since 1990, with a statistically significant 37% rate reduction for crashes involving a left turn in a study by Hansen [4].

A 1995 paper by Theeuwes and Riemersma criticized the odds ratio methodology of all these early studies [6]. In response, a meta-analysis of 17 studies by Elvik estimated a decrease in crash rate of 10-15% for multivehicle crashes and a total crash reduction of 3-12% [7]. The first studies of DRLs in North America were done on fleet vehicles.
In a study by Stein, corporate fleet vehicles in the USA equipped with DRLs had 7% fewer relevant crashes compared to the group of fleet vehicles without DRLs during 1983-1984 [8]. Sparks et al. reported a 15% crash reduction in government fleet vehicles in Canada equipped with DRLs [9]. By December 1989, all newly manufactured vehicles in Canada were required to be equipped with DRLs, and within 4 years, Arora et al. reported a statistically significant 8% reduction in relevant collisions [10].

DRLs in non-fleet passenger vehicles have been introduced more recently in the USA. In 1995, Volvo and Saab were the first to install DRLs on all their new cars sold in the USA. By 1997, all new Suzuki, Volkswagen, and General Motors models included DRLs. Yet a decade later, only a few studies and reports have been published regarding the use of daytime headlights in the USA. Farmer and Williams used a case-control method to analyze multiple-vehicle daytime crashes in nine states for a group of vehicles equipped with DRLs. They reported that these vehicles were involved in 3.2% fewer crashes [11]. The National Highway Traffic Safety Administration (NHTSA) reported a preliminary assessment in June 2000. Using the Fatality Analysis Reporting System (FARS), they analyzed fatal crashes in four states from 1995 to 1997. They found no significant difference in the risk of two-vehicle opposite-direction crashes comparing vehicles with DRLs to vehicles without DRLs. However, using the State Data System (SDS) from Florida, Maryland, Missouri, and Pennsylvania, a statistically significant 7% reduction in risk for relevant (including crash subtypes presumably affected by DRLs, such as opposite-direction) nonfatal crashes was identified, and DRL-equipped vehicles were associated with 28% fewer pedestrian fatalities [12].

In this study, we tested the hypothesis that passenger vehicles in the USA equipped with DRLs are associated with decreased crash rates compared to those without DRLs under "high test" weather (daylight and optimal visibility) and road (dry) conditions.

Methods

This was a retrospective study using the Minnesota Department of Transportation (MNDOT) Crash Database from 1995 to 2002. Vehicle crashes for which police reports were filed were cross-verified and matched against the NHTSA archival registry maintained for research purposes. Definitions of "crash" and "fatality" were based on the terminology referenced by the MNDOT Traffic Accident Report (form version: PS-32003-10) as documented by police authorities at the time of the actual accident. Specifically, the fatalities recorded were any scene deaths immediately related to the motor vehicle collision. Crash reports included in the analyses were limited to crashes involving automobiles, pickups, and vans, and to crashes that occurred under high-test weather and road conditions, all defined a priori. The high-test conditions included: (1) temporal limitation to daylight, defined as dawn to dusk, (2) optimal visibility, defined as clear or cloudy, and (3) road surface identified as dry. Studied vehicles were also limited to models 1995 and newer, since prior models did not have DRLs. The vehicle identification number (VIN) of vehicles involved in crashes was used to determine the specific make, model, and year. This information was cross-referenced with a NHTSA table of manufacturer-listed DRL conditions to determine each vehicle's DRL status.
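As an illustration of this record-selection and VIN-matching step, the following pandas sketch is a hypothetical reconstruction; the file names, column names, and the simplified VIN-prefix join are all assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch of the record selection and DRL-status join
# described above. File names, column names, and the VIN-prefix match
# are illustrative assumptions, not the study's actual data pipeline.
import pandas as pd

crashes = pd.read_csv("mndot_crashes_1995_2002.csv")   # assumed file
drl_status = pd.read_csv("nhtsa_drl_by_model.csv")     # assumed file

# Restrict to automobiles/pickups/vans under the a priori "high test"
# conditions: daylight, clear or cloudy weather, dry road, model year >= 1995.
high_test = crashes[
    crashes["vehicle_type"].isin(["automobile", "pickup", "van"])
    & crashes["light_condition"].eq("daylight")
    & crashes["weather"].isin(["clear", "cloudy"])
    & crashes["road_surface"].eq("dry")
    & (crashes["model_year"] >= 1995)
].copy()

# Simplified stand-in for VIN decoding: combine the descriptor prefix
# with the model-year character as a make/model/year key, then join
# the manufacturer-listed DRL status.
high_test["vin_key"] = high_test["vin"].str[:8] + high_test["vin"].str[9]
merged = high_test.merge(drl_status, on="vin_key", how="left")
print(merged["drl_standard"].value_counts(dropna=False))
```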
Crash rates for vehicles with and without the standard DRL feature were calculated relative to the number of all registered vehicles in Minnesota with or without the DRL feature, respectively. The number of registered vehicles in Minnesota was determined from the MNDOT vehicle registration file obtained in 2004 for models 1995-2002. In 2004, the numbers of these vehicles with and without standard DRLs were 788,840 and 1,763,134, respectively. MNDOT does not keep a retrospective database of registered vehicles; therefore, the only total number of vehicles that can be obtained is a number in real time. This number was obtained in 2004, at the time the study was started. Use of this single-year denominator assumes that the proportion of vehicles with and without the standard DRL feature was constant over the years of this study. Although the rates will be overestimated, since the denominators represent a single year, the rate ratios will be appropriate if the previous assumption holds. Ninety-five percent confidence intervals (CI) for the rates were constructed using a Poisson error distribution. The two rates were compared using a two-sided F test for the ratio of two Poisson random variates.

Results

During the 7-year study period, 184,637 vehicles (1995 or newer) had identifiable VINs and were involved in accidents that occurred under the specified test conditions. Of these vehicles, 37,909 were determined to have standard DRLs and 146,728 were determined to be models without DRLs (Fig. 1). The standard DRL group had a higher percentage of automobiles vs. pickups and vans (78.5%) than the group without standard DRLs (66.3%). Other accident characteristics were similar between the standard and nonstandard DRL groups (Table 1). Vehicle crashes were divided by the type of collision, including collisions with other vehicles, pedestrians, and bicycles (Table 1). Of the 37,909 vehicles with standard DRLs involved in accidents, 34,475 were involved in collisions with other vehicles, a crash rate of 437 per 10,000 vehicles (95% CI: 432-442). Of the 146,728 vehicles without standard DRLs involved in accidents, 133,892 were involved in collisions with other vehicles, a crash rate of 759 per 10,000 vehicles (95% CI: 755-764). The rate ratio for vehicles involved in collisions with other vehicles was 1.74 (95% CI: 1.72-1.76; p<0.001) (Table 2). A total of 230 vehicles with standard DRLs were involved in collisions with pedestrians, a crash rate of 2.9 per 10,000 vehicles (95% CI: 2.5-3.3). In comparison, a total of 911 vehicles without standard DRLs were involved in collisions with pedestrians, a crash rate of 5.2 per 10,000 vehicles (95% CI: 4.8-5.5) (Table 2). Finally, for collisions with a bicycle, there were 358 vehicles with standard DRLs involved in such collisions, a crash rate of 4.5 per 10,000 vehicles. Without standard DRLs, 1,379 vehicles were involved in collisions with bicycles, a crash rate of 7.8 per 10,000 vehicles. The rate ratio for vehicles involved in collisions with pedestrians was 1.77 (95% CI: 1.53-2.05; p<0.001), and the rate ratio for vehicles involved in collisions with bicycles was 1.72 (95% CI: 1.54-1.94; p<0.001) (Table 2).
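The crude rates, Poisson CIs, and rate ratios reported above can be reproduced with a few lines. The sketch below uses a standard chi-square-based exact Poisson interval, which is an assumption on our part: the paper states only that a Poisson error distribution was used.

```python
# Sketch of the crude-rate calculation: crashes per 10,000 registered
# vehicles with an exact Poisson 95% CI, using the counts reported
# above. The chi-square CI construction is one standard choice, not
# necessarily the authors' exact method.
from scipy.stats import chi2

def rate_with_ci(events, exposure, per=10_000, alpha=0.05):
    lo = chi2.ppf(alpha / 2, 2 * events) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return (per * events / exposure, per * lo / exposure, per * hi / exposure)

drl = rate_with_ci(34_475, 788_840)        # ~ (437, 432, 442)
no_drl = rate_with_ci(133_892, 1_763_134)  # ~ (759, 755, 764)
print(drl, no_drl, no_drl[0] / drl[0])     # rate ratio ~ 1.74
```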
Discussion

Based on our study results, DRLs had an association with vehicle crash reduction in motor vehicle collisions, consistent with two previous studies. Farmer and Williams showed that vehicles equipped with DRLs were involved in 3.2% fewer crashes [11]. The NHTSA reported a 7% reduction in risk of relevant nonfatal crashes [12]. Our crude crash rate reduction, as reflected by the rate ratio, was notably higher than in both of these previous studies. This may be due to the fact that our study was a retrospective study of all vehicle crashes in Minnesota during the time period, whereas the preceding studies employed a case-control methodology to compare specific subsets of vehicles with and without DRLs.

Our study shows a statistically significant reduction in fatal crashes for vehicles with DRLs versus those without DRLs. The NHTSA report found no significant reduction in fatal crashes [5]. This latter finding may be attributable to the relatively low numbers of vehicles involved in fatal crashes compared to all crashes reflected in the NHTSA study denominator. Larger studies with greater numbers of fatal crashes would be helpful to further delineate the impact of DRLs on fatal crashes, where causation is likely multifactorial.

Vehicles with standard DRLs showed a lower rate of collision with other vehicles than vehicles without DRLs. This is a subtype of crashes that would be expected to be affected by the DRL feature, as increased visibility of other vehicles would likely decrease collisions [13]. In addition, the rate of vehicles colliding with pedestrians may also be predictably lowered by the use of DRLs, because these vehicles may be more visible to pedestrians. Our study does demonstrate a reduction in vehicle-pedestrian crashes, not inconsistent with the 28% reduction reported by the NHTSA [5]. To our knowledge, no traffic law revisions, such as lower speed limits or newer primary-enforcement seat belt laws, affected our crash rates. Specifically, there were no traffic law changes in Minnesota identified during the study period.

Our study has at least four limitations. First, vehicles with unknown DRL status were excluded from the analysis, and the incremental value of layered standards or optional DRLs in crash prevention is not quantified. Second, a single-year snapshot of the data may fail to capture the whole picture in complex, large-volume relationships over time. We used a denominator from vehicles registered in 2004 and assumed a similar proportion of vehicles with standard DRLs to those without for all of the study years. We believe the proportion of vehicles remained reasonably constant over the study period, but there is no retrospective database to confirm this. Third, the use of "best-case scenario" assumptions to disprove the null hypothesis may limit capture of other significant differences between groups. Lastly, confounders related to driver or vehicle parameters, such as age, experience, or safety record, may significantly affect the associations. Driver and vehicle files containing private or privileged information (insurance status, license qualifications, organ donor information, health outcomes, etc.) were not accessible for the purposes of this research study.

Conclusion

Minnesota vehicles equipped with DRLs, model year 1995 or newer, were associated with a statistically significant decrease in crash rates compared to vehicles without DRLs from 1995 to 2002. These improvements followed nonmandatory DRL implementation by select manufacturers in the USA.
Day 15 and Day 33 Minimal Residual Disease Assessment for Acute Lymphoblastic Leukemia Patients Treated According to the BFM ALL IC 2009 Protocol: Single-Center Experience of 133 Cases

Introduction: Childhood acute lymphoblastic leukemia (ALL) is a hematologic malignancy characterized by the acquisition of several genetic lesions in the lymphoid progenitors, with subsequent proliferation advantage and lack of maturation. Over the years, it has been repeatedly shown that minimal residual disease (MRD) plays an important role in prognosis and therapy choice. The aim of the current study was to determine the prognostic role of MRD in childhood ALL patients in conjunction with other relevant patient and disease characteristics, thus showing the real-life scenario of childhood ALL. Patients and Methods: This retrospective study includes childhood ALL patients treated according to the BFM ALL IC 2009 protocol between January 2016 and December 2018 at the Fundeni Clinical Institute, Bucharest, Romania. Results: None of the variables significantly influenced induction-related death in our study. None of the variables independently predicted relapse-free survival (RFS), with the strongest tendency toward statistical significance shown by poor prednisone response. Non-relapse mortality (NRM) was independently predicted by age, prednisone response, and day 33 flow cytometry MRD (FCM-MRD). Overall survival (OS) was independently predicted by prednisone response and day 33 FCM-MRD. Event-free survival (EFS) was independently predicted by age, prednisone response, and day 33 FCM-MRD. Conclusion: Prednisone response, day 15 FCM-MRD, day 33 FCM-MRD, and the risk group represent the most important factors that, in the current study, independently predict childhood ALL prognosis.

INTRODUCTION

Acute lymphoblastic leukemia (ALL) is the most common malignancy in children, responsible for 30% of all pediatric neoplasms (1). Childhood ALL is characterized by the acquisition of several genetic lesions in the lymphoid progenitors, with subsequent proliferation advantage and lack of maturation. The clinical presentation of this condition is generally the result of bone marrow infiltration by lymphoid blasts and involvement of extramedullary organs (2). Childhood ALL mortality has decreased since the 1970s (3), with some patient categories reaching cure rates of 90% as a direct consequence of better patient stratification and therapeutic approach (4). One of the most important variables in patient stratification is MRD assessment and the risk stratification based on it (5). Initially, MRD was assessed by only a few study groups, but currently most ALL protocols include MRD evaluation (6). The sensitivity of MRD detection depends on the technique used, ranging from flow cytometry MRD (FCM-MRD) detection in the bone marrow to more sensitive methods: polymerase chain reaction (PCR) for fusion genes or immunoglobulin/T-cell receptor gene rearrangements, and next-generation sequencing (NGS). Still, some molecular techniques are more difficult to implement in the clinical setting because of lack of standardization, costs, or the absence of a specific target (6,7). It has been shown that MRD assessment has a big impact not only in determining the prognosis of ALL patients, but also in tailoring the therapeutic management, this strategy being used by multiple ALL protocols (8-11).
Nevertheless, MRD might not be sufficient in all patient subtypes, making it necessary to assess additional disease characteristics, as is the case for cytogenetic risk (12,13). Relapse-free survival (RFS) and overall survival (OS) are influenced by the disease itself and by complications during therapy. Despite the progress in treating this disease, ~20% of patients still relapse (14). Thus, the aim of the current study is to determine the results of the BFM ALL IC 2009 protocol in our clinical center, showing a real-life scenario of childhood ALL.

Patients

This is a retrospective study that included all newly diagnosed ALL patients between January 2016 and December 2018 in the Department of Hematology and Stem Cell Transplantation, Fundeni Clinical Institute, Bucharest, Romania, with follow-up until December 2019. Children under the age of 1, as well as those with L3 morphology or bilineal/biphenotypic ALL, were excluded. The legal guardians signed an informed consent prior to enrollment. The study was in accordance with the Declaration of Helsinki and received approval from the ethics committee of the Fundeni Clinical Institute, Bucharest, Romania (6323/04.02.2020). Therapy followed the BFM ALL IC 2009 treatment plan. MRD measurement was performed using a 10-color flow cytometry analyzer (Navios, Beckman Coulter). We identified abnormal expression of immunophenotypic markers, defined as a leukemia-associated aberrant immunophenotype, using a combination of eight markers and reaching a 10^-4 level of sensitivity. A good prednisone response was defined as a day 8 absolute blast count under 1000/µL; day 8 was preceded by 7 days of prednisone and one dose of intrathecal methotrexate on day 1. Bone marrow status was defined as M1 if <5% blasts were found on bone marrow aspirate, M2 if ≥5% and <25% blasts, and M3 if ≥25% blasts. Day 15 FCM-MRD groups were defined as under 0.1%, 0.1%-1%, 1%-10%, and 10% or more; the intermediate intervals are closed on the left and open on the right. Day 33 FCM-MRD groups were defined as under 0.05% and 0.05% or more. We defined relapse-free survival (RFS) as the interval of time from diagnosis to relapse of any kind. We defined non-relapse mortality (NRM) as the interval of time from diagnosis to death in patients that did not relapse; in this case, the time of relapse was right-censored. Causes of death were documented: death in induction, treatment-related, and non-treatment-related mortality. We defined OS as the interval of time from diagnosis to death of any cause. Induction-related death was defined as death before day 33. Event-free survival (EFS) was defined as the interval of time from diagnosis to either death or relapse.

Data Analysis

Data analysis was performed using R 3.5.3. Categorical variables were represented as absolute value (percentage). Contingency tables were analyzed using the Fisher test. The Shapiro test and histogram visualization were used to assess the normality of the distribution. Normally distributed variables were represented as mean ± standard deviation, and non-normally distributed variables were represented as median (quartile 1, quartile 3). Differences between two normally distributed groups were assessed using the t-test. Differences between two non-normally distributed groups were assessed using the Mann-Whitney-Wilcoxon test. Univariate survival analysis was performed using a univariate Cox proportional hazards model. Variables that reached a p-value under 0.1 in the univariate Cox proportional hazards model were further used in the multivariate Cox proportional hazards model, with one exception: if both morphologic bone marrow involvement and FCM-MRD of the same day reached the inclusion criterion, we selected only the FCM-MRD, considering their known association. If both day 15 and day 33 FCM-MRD reached the inclusion criterion, the day with the lowest p-value was included in the multivariate analysis. The risk group was not included in the multivariate analysis, considering that it is composed of other variables that would be included in the multivariate model. In cases in which there was no event in one of the selected groups and the Cox proportional hazards model was not suitable, we used the log-rank test to generate a p-value and interpreted the Kaplan-Meier curves to determine the direction of the effect. A p-value under 0.05 was considered statistically significant.
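The univariate screen feeding the multivariate model, as just described, can be sketched as follows. The study used R; the Python lifelines version below, with a synthetic data frame and illustrative column names, is an assumption-laden stand-in, not the authors' code or data.

```python
# Sketch of the univariate -> multivariate Cox selection rule described
# above, using lifelines. Data are synthetic stand-ins, not study data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 133
df = pd.DataFrame({
    "age": rng.integers(1, 18, n).astype(float),
    "poor_prednisone": rng.integers(0, 2, n).astype(float),
    "d33_mrd_high": rng.integers(0, 2, n).astype(float),
    "time": rng.exponential(800, n),     # days from diagnosis
    "event": rng.integers(0, 2, n),      # 1 = event, 0 = censored
})

selected = []
for var in ["age", "poor_prednisone", "d33_mrd_high"]:
    uni = CoxPHFitter().fit(df[[var, "time", "event"]],
                            duration_col="time", event_col="event")
    if uni.summary.loc[var, "p"] < 0.1:  # univariate inclusion threshold
        selected.append(var)

if selected:  # multivariate model on the screened variables
    multi = CoxPHFitter().fit(df[selected + ["time", "event"]],
                              duration_col="time", event_col="event")
    multi.print_summary()
```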
RESULTS

We included 133 childhood ALL patients in the current study, with their general characteristics presented in Table 1. The median follow-up of the cohort was 810 (490, 1076) days. None of the 103 (77.44%) patients evaluated at day 78 presented MRD positivity. Figure 1 shows an overview of the cohorts and the main endpoints. It can be observed on the Kaplan-Meier curves in Figure 2 (where CIR = cumulative incidence of relapse) that, for day 15 FCM-MRD, the curves of the groups under 0.1% and between 0.1% and 1% highly overlap, and so do the curves for 1%-10% and 10% or more (for OS and EFS). This is the reason for dichotomizing the analysis using a day 15 FCM-MRD of 1% as the cutoff.

Probably because of the small number of cases that died before day 33, none of the variables showed an association with induction-related death (Supplementary Table 1). In the RFS univariate analysis, there was a negative impact of older age, T-ALL compared to B-ALL, leukocytes over 100 × 10^9/L, poor prednisone response, high risk group, day 33 morphologic presence of disease, and day 33 FCM-MRD over 0.05% (Supplementary Table 2). In the multivariate analysis, none of the included variables were shown to be independently associated with relapse (Supplementary Table 3). In the NRM univariate analysis, the following variables presented a negative impact: female sex, poor prednisone response, high risk group, day 15 M3 bone marrow, day 33 morphologic presence of disease, and day 33 FCM-MRD over 0.05% (Supplementary Table 4). In the multivariate analysis, older age became a risk factor, and poor prednisone response and day 33 FCM-MRD over 0.05% maintained their association with poor prognosis (Supplementary Table 5). There was no association found between day 15 or day 33 FCM-MRD and either hemoglobin or platelet count at diagnosis. In the OS univariate analysis, there was a negative impact of female sex, poor prednisone response, high risk group, day 15 M3 bone marrow, day 15 FCM-MRD over 1%, day 33 morphological disease, and day 33 FCM-MRD over 0.05% (Supplementary Table 6). In the multivariate analysis, poor prednisone response and day 33 FCM-MRD over 0.05% remained associated with poor OS (Supplementary Table 7).
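A Figure 2-style comparison, dichotomizing at the 1% day 15 FCM-MRD cutoff and plotting Kaplan-Meier curves, can be sketched as follows (again with lifelines and synthetic illustrative data, not the study cohort).

```python
# Sketch of Kaplan-Meier curves dichotomized at the day 15 FCM-MRD 1%
# cutoff discussed above. Data are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
n = 133
time = rng.exponential(800, n)      # days from diagnosis
event = rng.integers(0, 2, n)       # 1 = death, 0 = censored
mrd_high = rng.random(n) < 0.25     # stand-in for day 15 FCM-MRD >= 1%

ax = plt.subplot(111)
for mask, label in [(~mrd_high, "day 15 FCM-MRD < 1%"),
                    (mrd_high, "day 15 FCM-MRD >= 1%")]:
    kmf = KaplanMeierFitter()
    kmf.fit(time[mask], event_observed=event[mask], label=label)
    kmf.plot_survival_function(ax=ax)
ax.set_xlabel("days from diagnosis")
ax.set_ylabel("overall survival probability")
plt.show()
```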
In the EFS univariate analysis, older age, severe thrombocytopenia, poor prednisone response, high risk group, M3 morphology on day 15, day 15 FCM-MRD over 1%, day 33 bone marrow morphologic disease, and day 33 FCM-MRD over 0.05% presented a negative impact on EFS (Supplementary Table 8). In the multivariate analysis, older age, poor prednisone response, and day 33 FCM-MRD over 0.05% were independently associated with worse EFS (Supplementary Table 9). We further assessed the association between the factors that were statistically significant in the multivariate analysis (Figure 3). In Figure 2, we presented Kaplan-Meier curves on the influence of day 15 FCM-MRD, day 33 FCM-MRD, and risk group on OS, RFS, and NRM. Considering that some of the patients presenting day 15 FCM-MRD over 1% can transition to a day 33 FCM-MRD either over or under 0.05% (Figure 3), whereas almost all patients with day 15 FCM-MRD under 1% tend to reach a day 33 FCM-MRD under 0.05%, we decided to observe which day has the biggest influence in the subgroup of patients with day 15 FCM-MRD over 1% (Figure 4). Moreover, the importance of the sequential prednisone response followed either by day 15 FCM-MRD over 1% (Supplementary Figure 1) or day 33 FCM-MRD (Supplementary Figure 2) was assessed. In Figure 5, we presented a Sankey plot showing the association between the immunophenotype and day 15 and day 33 FCM-MRD. There was no association between immunophenotype and day 15 MRD (p = 0.668), but there was a statistically significant association between immunophenotype and day 33 MRD (p = 0.00159). It must be mentioned that, as can be observed in Supplementary Tables 2 and 3, immunophenotype and day 33 FCM-MRD predict relapse, but both lose statistical significance in the multivariate analysis. This being said, the p-value for day 33 FCM-MRD is lower than that for immunophenotype in both the univariate and multivariate analyses, showing that day 33 FCM-MRD is, at the very least, able to replace immunophenotype regarding relapse, thus making day 33 FCM-MRD relevant regardless of immunophenotype.

DISCUSSION

This research shows that day 33 FCM-MRD, as well as poor prednisone response status, are among the most important independent factors in predicting OS, NRM, and EFS. The results presented here are in accordance with the vast majority of the literature, showing the high prognostic impact of FCM-MRD (8-11). Nonetheless, these data are not adjusted by subgroup to better tailor the impact of MRD, which may be a consequence of the relatively small cohort and the low numbers of some disease types, such as early T-cell precursor ALL (ETP-ALL) (12,13). Although prednisone response was introduced a long time ago, it still has prognostic relevance in the multivariate model (15,16). Regarding prednisone response, only 18 patients were poor responders, and thus we could not assess all cases with a bad prognosis. As mentioned before, the risk groups were not included in the multivariate analysis, considering that they are assigned based on other factors that were included. Nonetheless, the prognostic stratification of the risk groups was similar to that in the published literature, with the SRG and IRG Kaplan-Meier curves being close together and HRG having a markedly worse outcome. T-ALL presented a higher risk of relapse compared to B-ALL in the univariate analysis, a fact that is in accordance with the published literature (17,18).
Nonetheless, this effect is not maintained in the multivariate model, showing that, in our experience, FCM-MRD and prednisone response can overrule the immunophenotype. Immunological classification is not to be removed from clinical management, considering the different biology of these diseases and, thus, their different therapeutic management (17,18). Although female patients generally have a better outcome compared to male patients (19), in the current study we observed in the univariate analysis that males had favorable OS and NRM, an effect that was not maintained in the multivariate analysis. Also, induction-related death was not predicted by any variable, probably because of the low number of events included in this category. Although not statistically significant, it appears that day 33 FCM-MRD over 0.05% after a day 15 FCM-MRD over 1% predicts worse OS and RFS, although this must be validated in larger cohorts. Moreover, we observed that a poor prednisone response worsens the prognosis of a patient with day 15 FCM-MRD over 1%. Still, because of the low number of patients included in this subanalysis, there were no patients with day 15 FCM-MRD over 10% and poor prednisone response who relapsed at follow-up. Regarding prednisone response and day 33 FCM-MRD, we report that the combination of good prednisone response and FCM-MRD under 0.05% offers the best prognosis, the combination of poor prednisone response and day 33 FCM-MRD over 0.05% has the worst prognosis, and the two intermediate combinations generate intermediate survival curves that seemingly overlap. Interestingly, FCM-MRD was strongly correlated with NRM. NRM is high among patients with high levels of MRD. This subclass of patients underwent intensive chemotherapy with aggressive protocols, thus being severely immunocompromised and cytopenic. The early mortality for these patients was due to cerebral hemorrhage caused by severe thrombocytopenia (in two patients) and sepsis (due to therapy-related infections in the rest).

CONCLUSIONS

In conclusion, prednisone response, day 15 FCM-MRD, day 33 FCM-MRD, and the risk group represent the most important factors that independently predict childhood ALL prognosis in the current study.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the ethics committee of the Fundeni Clinical Institute, Romania. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

L-ER, SP, and AndC contributed equally to the current manuscript and are all considered first authors, having written the manuscript. All the other authors contributed to clinical data collection. AncC supervised the work.
Risk-Neutral Market Simulation

We develop a risk-neutral spot and equity option market simulator for a single underlying, under which the joint market process is a martingale. We leverage an efficient low-dimensional representation of the market which preserves no static arbitrage, and employ neural spline flows to simulate samples which are free from conditional drifts and are highly realistic in the sense that, among all possible risk-neutral simulators, the obtained risk-neutral simulator is the closest to the historical data with respect to the Kullback-Leibler divergence. Numerical experiments demonstrate the effectiveness of the approach and highlight both the drift removal and the fidelity of the calibrated simulator.

Introduction

There has been growing interest in the application of reinforcement learning methods to financial markets. In many domains, a significant difficulty is the availability of sufficient historical data to train reinforcement learning agents on; for example, taking daily data for an equity underlying over ten years amounts to only a few thousand samples, making any algorithm prone to overfitting and lacking robustness. To address this data scarcity challenge, there has been much work in the area of using generative machine learning models to simulate realistic samples from the same distribution as the historical market data. To name a few: (Arribas, Salvi, and Szpruch 2020; Buehler et al. 2020; Wang 2021, 2022; Cuchiero, Khosrawi, and Teichmann 2020; De Meer Pardo, Schwendner, and Wunsch 2021; Gierjatowicz et al. 2020; Ni et al. 2020, 2021; Wiese et al. 2019, 2020). In this paper, we focus on the simulation of equity option markets of a single underlying. Through an effective parametrization of the market in terms of discrete local volatilities (Buehler and Ryskin 2016), we are able to guarantee the absence of static arbitrage from the market simulator. However, a major difficulty in the simulation of equity option markets is the presence of statistical arbitrage, that is, trading strategies which, starting from an empty initial portfolio, have positive expectation. This poses a particular challenge to the robustness of reinforcement agents trained on simulated data for the purposes of hedging and risk management of a derivatives portfolio. (Opinions expressed in this paper are those of the authors, and do not necessarily reflect the view of J.P. Morgan.) In the presence of statistical arbitrage, an agent's proposed action will be not only a hedge, but will also contain an additional component that is independent of the portfolio it is hedging and purely seeks profitable opportunities in the market; such components are particularly sensitive to errors in estimating the drift (conditional expectation) of the tradeable instruments. It is therefore desirable to build models which are free from such statistical arbitrage opportunities. In the absence of trading constraints, such as transaction costs, removing statistical arbitrage is equivalent to simulating from a risk-neutral measure: a distribution under which the (discounted) price processes of all traded instruments are martingales, E[X_{t+1} | F_t] = X_t. Simulating from such a martingale measure is a long-standing challenge in quantitative finance, with the classic approach being to specify a suitable parametric model for the underlying under the risk-neutral measure and calibrate its parameters to historical data.
These models are, however, often motivated more by tractability than expressiveness, and are often limited to single underlyings, not to the joint distributions of an underlying and options written on it. Therefore, we take a different approach in this paper and demonstrate how machine learning methods can be used for direct simulation under risk-neutral measures. We build on previous works on market simulation via flow-based models, and on methods to "remove the drift" in a market by solving a utility optimization problem (Buehler et al. 2022), in order to create a new data-driven method for direct simulation from a risk-neutral measure, given historical samples from the physical measure. The use cases and advantages of this approach include providing training data for a Deep Hedging algorithm to construct hedging strategies for portfolios of derivatives, trading in hedging instruments consisting of an underlying and vanilla options. By training the algorithm with a risk-neutral simulator, we can remove the statistical arbitrage component, leaving a hedge that is robust against estimation errors of the drift (Buehler et al. 2022). We emphasise that the approach presented in this paper is a broad framework to construct a risk-neutral market simulator from real equity option market data. Our approach is not limited to the application of normalizing flows; normalizing flows are solely used for the approximation of a conditional generative density.

Background

In the following subsections, we introduce the three main building blocks required to construct a risk-neutral market simulator. We begin by introducing neural spline flows for approximating conditional generative densities. Afterwards, we review the construction of the equity option market simulator and derive the objective function. In the last subsection, we introduce entropy-based risk-neutral densities (Buehler et al. 2022).

Neural spline flows for time series modeling

Normalizing flows (Papamakarios et al. 2019) are diffeomorphisms (differentiable and bijective functions) constructed by means of neural networks. Being bijective allows the approximation of densities through the application of the density transformation theorem. In this paper, we consider neural spline flows (Durkan et al. 2019b), a specific subclass of normalizing flows that has recently gained popularity due to its ability to universally approximate densities while having an analytically tractable inverse. Recently, neural splines with linear interpolation were proposed to approximate generative densities for time series data. For completeness, we illustrate the construction of neural splines with linear interpolation for time series below. Assume that (X_t)_{t in N} ~ p is a discrete-time Markov process taking, without loss of generality, values in [0, 1]^d. Furthermore, let T_eta = (T_{eta,1}, ..., T_{eta,d}) : [0, 1]^d x [0, 1]^d -> [0, 1]^d be a map that is increasing in the second component (cf. (Bogachev, Kolesnikov, and Medvedev 2005)), parametrised by eta in H. Then the generated process can be defined for j = 1, ..., d as X_{t+1,j} = T_{eta,j}(X_t, U_{t,j}), where U_t is a uniformly distributed random variable on the unit hypercube. Since the function T_eta is assumed to be bijective, the density transformation theorem can be applied for j = 1, ..., d to obtain the conditional density induced by T_eta, namely p_eta(x_{t+1,j} | x_t, x_{t+1,:j-1}) = 1 / T'_{eta,j}(x_t, u_j), where u_j is the latent variable conditional on the state x_t and the next-day partial state x_{t+1,:j-1}, and T'_{eta,j} denotes the derivative of T_{eta,j} with respect to u.
To make the derivative analytically tractable while maintaining monotonicity in T_{eta,j}, j = 1, ..., d, different interpolation schemes have been proposed (Durkan et al. 2019a,b). In this paper, we use linear interpolation and define the conditional density for a fixed number of bins B in N as a piecewise-constant density with bin probabilities (p_k)_{k=1,...,B}. Neural splines approximate the conditional density (p_k)_k with a neural network taking values in R^B; the output is then transformed via the softmax function and normalized to obtain a density. Due to the tractability of the conditional density, the flow T_eta can be approximated by minimizing the conditional Kullback-Leibler (KL) divergence, or equivalently, the negative log-likelihood (NLL)

L(eta) = -E[log p_eta(X_{t+1} | X_t)].  (1)

Note that here we use a conditional version of the KL divergence, since our objective is to approximate the conditional density of the market process. In practice, only a single time series (x_t)_{t=1}^T ~ p may be available. In this case, (1) is approximated via Monte Carlo:

L_hat(eta) = -(1/(T-1)) sum_{t=1}^{T-1} log p_eta(x_{t+1} | x_t).

Equity option market simulation

In this subsection, we briefly outline the construction of the market simulator. Let (Omega, F = (F_t)_{t in N}, P) be a filtered probability space. We refer to the probability measure P as the physical probability measure, and by p we refer to the associated physical density. Denote by X = (S, C) : Omega x N -> R_{>0} x R^{mn}_{>0} an adapted discrete-time Markov process, i.e., we assume for any s, t in N, t > s, that P(X_t in . | F_s) = P(X_t in . | X_s). The first component S = (S_t)_{t in N} represents the spot price of the underlying, assumed strictly positive. The second component C = (C_t)_{t in N} represents an mn-dimensional grid of call prices defined on a floating grid of times to maturity T = (tau_1, ..., tau_m) and relative strikes K = (k_1, ..., k_n) defined around the unit forward. Thus, the call price grid at any time t in N is defined for i = 1, ..., m and j = 1, ..., n so that C_{t,(i-1)n+j} is the price of the option with payoff (S_{t+tau_i}/S_t - k_j)^+. In what follows we refer to X as the market process.

Discrete local volatilities. In order to guarantee the absence of static arbitrage (riskless profits), realized grid prices c_t in R^{mn}_{>0} need to satisfy ordering constraints such as non-negativity, monotonicity in time, and convexity in strike (cf. (Gatheral and Jacquier 2014)). We therefore represent the grid prices by discrete local volatilities (DLVs) (Buehler and Ryskin 2016) as an arbitrage-free parametrisation of the considered price grid; no static arbitrage is a hard requirement, since if there exists static arbitrage under P, then no equivalent risk-neutral measure exists. DLVs take a local volatility-inspired parametrisation and can be seen as a discrete version of Dupire's famous local volatility (Dupire 1994). Most importantly, the mapping from non-negative DLVs to the call price grid is a bijective map Phi : R^{mn}_{>0} -> R^{mn}, which will be necessary to construct the manifold flow (2). We denote the unique corresponding grid of DLVs by Sigma_t = Phi^{-1}(C_t).

Interpolating the call price grid. The market process X was constructed using floating grid prices to obtain a stationary representation of the call prices. In order to obtain real-world prices with fixed expiries and strikes, we interpolate the floating price grid using a bilinear interpolation in the prices. For any t in N, denote the bilinearly interpolated grid at maturity tau in [min(T), max(T)] and relative strike k in [min(K), max(K)] as C~_t(tau, k). The price on day t + 1 in N of a call bought on day t at maturity-strike (tau, k) in T x K can then be read off the interpolated grid. Note that a spot move s_{t+1}/s_t greater than k/min(K) or smaller than k/max(K) will not make the interpolation feasible, due to the limited relative strike range. We treat this problem by introducing boundary relative strikes and maturities and interpolating to the option's intrinsic value (Buehler and Ryskin 2016). Using the option's interpolated value, we can compute the change in price of the call bought at time t with maturity-strike pair (tau, k).

Manifold assumption. We assume grid prices c_t in R^{mn}_{>0} to have support on a low-dimensional manifold. More precisely, we assume that there exists a latent dimension l << mn and an injective map psi : R^l -> R^{mn}_{>0} such that for any price grid c_t in R^{mn}_{>0} with positive density there exists a unique representation sigma_t such that c_t = psi(sigma_t) holds P-almost surely. The manifold assumption is justified by the high observed correlation in DLV levels as well as in their returns. Under the call price manifold assumption we can construct, for any states x_t, x_{t+1} in Im(psi) in the image set of psi, the manifold flow

p(x_{t+1} | x_t) = p~(psi^{-1}(x_{t+1}) | psi^{-1}(x_t)) |det(J_psi^T J_psi)(psi^{-1}(x_{t+1}))|^{-1/2},  (2)

(Brehmer and Cranmer 2020; Gemici, Rezende, and Mohamed 2016), where J_psi is the Jacobian of psi and p~ is the density on the reduced space. By leveraging (2), one can show that minimizing the NLL on the level of the spot and call prices (1) is equivalent to minimizing the NLL on the compressed state space. This motivates our choice to approximate market simulators in the compressed space.
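To make the linear-interpolation construction concrete, the following is a minimal one-dimensional sketch of the conditional piecewise-linear-CDF (piecewise-constant density) model and its Monte Carlo NLL (1), written in PyTorch. The bin count, network architecture, and the stand-in data are illustrative assumptions; the actual simulator operates on the compressed multi-dimensional state.

```python
# Minimal sketch: with B equal-width bins on [0, 1] and softmax bin
# probabilities p_k, the density of x_{t+1} in bin k is p_k * B, so the
# negative log-likelihood of an observation is -log(p_k * B).
import torch
import torch.nn as nn

B = 32  # number of bins (illustrative)

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, B))

def conditional_nll(x_t, x_next):
    """-log p_eta(x_next | x_t) for tensors of shape (batch, 1) in [0, 1]."""
    logits = net(x_t)                               # (batch, B)
    probs = torch.softmax(logits, dim=-1)           # bin probabilities
    k = (x_next.squeeze(-1) * B).long().clamp(max=B - 1)  # bin of x_next
    p_bin = probs.gather(-1, k.unsqueeze(-1)).squeeze(-1)
    return -(torch.log(p_bin * B)).mean()           # Monte Carlo NLL

# One training step on consecutive observations (x_t, x_{t+1}) of a path:
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.rand(257, 1)                              # stand-in market path
loss = conditional_nll(x[:-1], x[1:])
opt.zero_grad()
loss.backward()
opt.step()
```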
Then the price of a call bought on day t at maturity-strike (τ, k) ∈ T ×K has a price on day t + 1 ∈ N of Note that a spot move s t+1 /s t greater than k/ min(K) or smaller than k/ max(K) will not make the interpolation feasible due to the limited relative strike range. We treat this problem by introducing boundary relative strikes and maturities and interpolate to the option's intrinsic value (Buehler and Ryskin 2016). Using the option's interpolated value we can compute the difference in price of the call bought at time t at with maturity-strike pair (τ, k) as Manifold assumption We assume grid prices c t ∈ R mn >0 to have support on a low-dimensional manifold. More precisely, we assume that there exists a latent dimension l mn, an injective map ψ : R l → R mn >0 such that for any price grid c t ∈ R mn >0 with positive density there exists a unique representation σ t such that c t = ψ(σ t ) holds Palmost surely. The manifold assumption is justified by the high observed correlation in DLV levels as well as their returns (cf. ). Under the call price manifold assumption we can construct for any states x t , x t+1 ∈ Im(ψ) in the image set of ψ the manifold flow (Brehmer and Cranmer 2020;Gemici, Rezende, and Mohamed 2016) where where J ψ is the Jacobian of ψ. By leveraging (2) one can show that minimzing the NLL on the level of the spot and call prices (1) is equivalent to minimizing the NLL on the compressed state space This motivates our choice to approximate market simulators in the compressed space. Entropy-based risk-neutral densities A central concept in mathematical finance is that of a riskneutral probability measure, Q which is equivalent to P, under which the price of all tradeable instruments is the discounted expectation of the instrument's payoff. Throughout this article, we will assume for simplicity that the risk-free rate is zero which implies that X is a martingale under Q. It is well known that if the physical measure is free from arbitrage then such a risk-neutral measure must exist, although it will not in general be unique. To construct such a measure, recall that X represents instruments which are liquidly traded in a market. Consider a trading action a t ≡ a(X t ) ∈ R 1+mn , and a gains of taking this action over a single timestep of G(X t+1 , X t ) = a t · (X t+1 − X t ). Consider the strictly concave, strictly increasing utility function u(x) = (1 − e −λx )/λ -known as the exponential utility, or entropy. Let a * be a solution of the optimization problem and let G * be the associated optimal gains. Then we can define a measure Q via the conditional density, Using this density, we have the following result. Proposition 1. The conditional density q is an equivalent martingale density, i.e. E q [X t+1 |X t ] = X t . Furthermore, q minimizes the KL divergence to p over all equivalent martingale densities, and Q is hence called the minimal entropy martingale measure (Frittelli 2000). The measure Q is one example of a broader class of utility-based risk-neutral measures, see (Buehler et al. 2022) for further details. Due to the positivity of the exponential function, w > 0, ensuring that P and Q are equivalent, and the concavity of u means that w is a decreasing function of G * . Furthermore, the change of measure is normalized, i.e. E(w(X t+1 |X t = x t )) = 1, ensuring that q. 
The financial interpretation is that, by reweighting future states of the world in proportion to their marginal utility under the physical measure, outcomes which were profitable under P become less likely under Q, resulting in a measure in which all profit opportunities have been removed and all instruments must have zero drift.

Problem formulation

Our aim in this paper is to calibrate a model density q_theta that is close to the ground-truth risk-neutral density q with respect to the KL divergence, by using only samples from the physical density p. Note that our framework is very general and applies to any density estimator; in this paper we focus specifically on neural spline flows. Ignoring constants for clarity, our objective is to minimize the conditional NLL of the model under the target density q. Note that we minimize this objective by minimizing the inner conditional NLL for all states X_t, so the outer expectation can be taken over the equivalent physical measure. Furthermore, since we can neither evaluate nor sample from the target density q, we apply the change of measure transform above, so that this objective is equivalent to the weighted objective

min_theta -E_p[ w(X_{t+1} | X_t) log q_theta(X_{t+1} | X_t) ].  (*)

Due to the assumption that the decoder psi : R^l -> R^{mn}_{>0} is injective, the minimization problem (*) is equivalent to solving the corresponding calibration problem on the reduced space (2). Further note that for any x_t, x_{t+1} in Im(psi), the change of measure w(x_{t+1} | x_t) = q(x_{t+1} | x_t)/p(x_{t+1} | x_t) on the observed space coincides with the change of measure on the reduced space. Hence, we may write our objective as a weighted NLL on the reduced space. (**)

In the real world, both calibration problems (*) and (**) are not feasible. We only observe a single realization (x_t)_{t in I} sampled from the physical density p over some time horizon I := {0, ..., T}. Furthermore, we do not observe the call price function psi. We instead only have access to the empirical physical density p_hat(x_{t+1} | x_t), a sum of Dirac deltas at the observed transitions (x_t, x_{t+1}), and the corresponding empirical reduced physical density. This in particular makes learning the change of measure weights difficult when done directly from the empirical densities, since the optimization would be done on only single samples from each condition. To overcome these challenges, we use the following data augmentation method.

Augmentation with the physical market simulator

To get good estimates of the weights w(x_{t+1} | x_t), we augment the empirical distribution by first training a flow model density p_eta that minimizes the KL divergence to the physical density. We then train the model risk-neutral density to target the weighted model density. This is justified by the following proposition.

[Figure 3: Empirical histograms of the spot price and of the 60D and 120D at-the-money call prices sampled under the approximated real-world density p_eta (blue) and risk-neutral density q_theta (orange).]

Proposition 2. Let p_eta be a model density for the physical probability measure such that D_KL(p || p_eta) = 0, and let q_eta = w p_eta. Let T_theta be a Markovian market simulator such that the induced density q_theta satisfies D_KL(q_eta || q_theta) = 0. Then q_theta = q and the market simulator is risk-neutral.

Proof. Since zero KL divergence implies (almost sure) equality of the distributions, we have q_theta = q_eta = w p_eta = w p = q. Hence for all t we must have E_{q_theta}[X_{t+1} | X_t] = X_t, which, from the Markov assumption, implies that the simulator is risk-neutral.
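Anticipating the change-of-measure approximation described next, the procedure can be illustrated with a small numerical sketch: draw one-step increments from a stand-in for p_eta, minimize the convex loss L(a) = E[exp(-lambda a.dX)] to obtain a*, and form normalized weights w proportional to exp(-lambda a*.dX). The Gaussian increments, lambda, and dimensions below are placeholder assumptions, not the paper's calibrated simulator.

```python
# Sketch of the entropy change-of-measure step. At the optimum a*, the
# first-order condition E[exp(-lam a*.dX) dX] = 0 makes the reweighted
# drift E[w dX] vanish, which is exactly the martingale condition.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
lam = 1.0
dX = rng.normal(0.001, 0.01, size=(100_000, 3))  # stand-in simulator samples

def loss_and_grad(a):
    e = np.exp(-lam * dX @ a)                     # exp(-lam a.dX) per sample
    return e.mean(), (-lam * dX * e[:, None]).mean(axis=0)

res = minimize(loss_and_grad, x0=np.zeros(3), jac=True, method="L-BFGS-B")
a_star = res.x

g_star = dX @ a_star                              # optimal gains G*
w = np.exp(-lam * g_star)                         # w proportional to u'(G*)
w /= w.mean()                                     # normalize so E[w] = 1

# Sanity check: the reweighted drift should be (numerically) zero
print((w[:, None] * dX).mean(axis=0))
```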
Proposition 2 therefore allows us to train the risk-neutral market simulator on samples from the physical simulator via the NLL-based objective

min_theta -E_{p_eta}[ w(X_{t+1} | X_t) log q_theta(X_{t+1} | X_t) ].

Change of measure approximation

We further use samples from the physical simulator to approximate the change of measure weights by solving (3). This is equivalent to the convex optimization

min_a L(a) = E(exp(-lambda a . dX_{t+1}) | X_t = x_t),

where dX_{t+1} = X_{t+1} - X_t denotes the change in the instruments' values. The Hessian is given as grad^2_a L(a) = lambda^2 E(exp(-lambda a . dX_{t+1}) dX_{t+1} (x) dX_{t+1} | X_t = x_t). Practically, the utility loss function is approximated via Monte Carlo over samples dx^{(j)}_{t+1} generated using the physical market simulator. The weights w(x_{t+1} | x_t) are then computed directly using the converged a*. An example of the weights obtained across different samples is shown in Figure 4. The distribution of the weights illustrated here indicates another clear advantage of the approach: sampling efficiency. Since many of the weights will be less than 1, by applying the change of measure on the conditional density rather than on the joint density of the entire path, we avoid degenerate weights, and therefore avoid using sample paths from p with very low density under q. Putting these steps together, we summarize the approach in Algorithm 1.

Numerical results

In this section, we demonstrate the efficacy of our flow-based risk-neutral market simulator in generating samples which are martingales but nonetheless exhibit properties similar to the physical distribution. We consider the Eurostoxx 50 from January 2011 to December 2020, for a total of 2543 business days, and simulate spot and DLVs / options defined at relative strikes K = (0.7, 0.75, ..., 1.0, ..., 1.25, 1.3) and maturities T = (60, 120) days. The corresponding set of DLVs as well as the normalized Eurostoxx 50 spot price are displayed in Figure 1. For clarity, we then simulate prices of three tradeable instruments: the spot and the at-the-money call options with 60D and 120D expiry.

Approximating the simulator under P

The real-world market simulator was constructed as outlined above. We first approximated the autoencoder to reduce the dimensionality of the 26-dimensional grid of DLVs to the three-dimensional representation displayed in Figure 2. Afterwards, the flow was calibrated on the compressed representation.

[Figure 4: Distribution of the change of measure w(X_{t+1} | X_t = x_t) calibrated on the three considered instruments.]

Evaluating the drift

We test the risk-neutrality of the simulator in two ways. First, we generate a large number N = 262,144 (= 2^18) of samples from both p_eta(.|x_t) and q_theta(.|x_t) and compare the Monte Carlo drift (1/N) sum_j x^{(j)}_{t+1} - x_t for all three instruments between the two simulators. Table 1 clearly shows the reduction in drift in the marginal distributions, in both absolute and relative terms. Secondly, for the joint distribution, we check that there are no profitable trading opportunities under q_theta. Since u(x) <= x for all x, it follows that E(u(a . dX)) <= E(a . dX), and hence under a risk-neutral measure the optimal action for the exponential utility is a* = 0. Figure 5 shows the distribution of PnL obtained by optimal actions against different spot shifts. Specifically, we see how the optimal actions under p_eta are able to profit from larger spot moves but make losses on small moves. This can be understood by comparing the Black-Scholes implied volatilities of the options with the realized volatility of the spot under p_eta.
In this case, the options trade at implied volatilities of 21.3% and 23.1%, whereas the realized volatility of the spot is 25.4%, making options cheap and giving rise to a positive drift in the options; the optimal trading action is to buy options and hedge short with the spot. Under the risk-neutral density this drift is corrected, as can be seen in the subtle reduction in the volatility of the spot returns demonstrated in Figure 3. The end result is that the optimal actions under q_theta are effectively zero, with no PnL or utility obtainable.

[Figure 5: Scatter plot of PnL vs. spot return, comparing the spot move ds_{t+1} against the PnL a* . dX_{t+1} under the approximated real-world density p_eta and the approximated near risk-neutral density q_theta.]

Evaluating the conditional risk-neutral density in the compressed space

We evaluated the conditional approximated real-world and risk-neutral densities p_eta(.|y_t) and q_theta(.|y_t) by applying a kernel estimator on the generated samples in the compressed space (see Figure 6). Visually, we can observe that the change of measure alters the tails of the distribution. In particular, the tails of the spot distribution become narrower, which resulted in a clear reduction in the utility and PnL of the optimal volatility spread strategy a* (see Figure 5).

Conclusion

In this paper, we have outlined a general framework for data-driven simulation of financial market data from a risk-neutral measure, by combining a calibrated market simulator for the physical measure P with a change of measure found by solving a convex optimization on the physical simulator. We then train a simulator for the risk-neutral measure Q using a weighted negative log-likelihood objective. We demonstrate the effectiveness of the method by using neural spline flows to construct a simulator for markets of an equity underlying and its options, and show that the resulting simulator is indeed risk-neutral while still retaining relevant properties of the historical data. We leave it as future work to construct the full risk-neutral market simulator from which one can sample, and to expand the set of tradable instruments to wider floating grids of relative strikes and maturities. Another direction of research is the application of manifold flows to guarantee the injectivity of the decoder by construction (Brehmer and Cranmer 2020).
The Screening of Resistance against Meloidogyne graminicola in Oats

Including pest-resistance elements against major local concerns is naturally important in the breeding process. Oat (Avena sativa L.) has recently been reintroduced into Taiwan as a winter alternative forage crop, and its agronomic performance has been evaluated at different locations in the country. This study examined the resistance to the root-knot nematode Meloidogyne graminicola in four oat (Avena sativa L.) breeding lines with mass-planting potential for winter in Taiwan. The level of host attraction to the nematode, and the ability of the nematode to penetrate and reproduce in host roots, were evaluated by chemotaxis assay, root staining assay, root galling, and nematode extraction. Based on the gall index (GI) and multiplication factor (R), the resistance of each oat line was evaluated. At 24 h postinoculation, second-stage juvenile (J2) nematodes appeared most attracted to oat breeding lines UFRGS136104-3 and UFRGS136119-2. The numbers of J2s that successfully penetrated these two breeding lines were also high. However, at 40 days postinoculation, observation of the oats in the newly developed culture-bag nematode-inoculation system revealed that the numbers of root galls and 2nd-generation nematodes were significantly higher in line LA08085BS-T2 than in the other lines. In sum, oat breeding line UFRGS136104-3 was highly resistant to M. graminicola, inhibiting gall formation and nematode reproduction; UFRGS136106-3 and UFRGS136119-2 showed relatively weak resistance; and oat line LA08085BS-T2 was a moderately susceptible host to M. graminicola, with high numbers of root galls formed. The outcome of this study provides ground information for nematode-resistant oat cultivar breeding.

Introduction

The rice root-knot nematode, Meloidogyne graminicola, is a plant-parasitic nematode common to Poaceae crops globally. The main hosts of M. graminicola in the family Poaceae include rice, wheat, barley, oat, and sorghum [1]. Research has shown that this nematode can cause up to 73-80% yield loss on rice, whether in upland or intermittently flooded conditions, when the nematode population is high [2,3]. M. graminicola is widely distributed in all agricultural regions of Taiwan. It completes one life cycle within 19-21 days at 28-30 degC and causes an average disease rate of 80% in rice paddy fields [4]. The aboveground symptoms of M. graminicola infection often appear as patches in a field, with the infected plants showing growth reduction, reduced vigor, yellowing and curling of leaves, wilting, and poor yield [5]. Belowground, the 2nd-stage juveniles of the nematode invade rice root tips, and the infected root tips swell and form horseshoe-like hooks, the characteristic symptom of this nematode, as the females develop and lay eggs within the roots. Management of M. graminicola in Taiwan is difficult, since no nematicide is currently approved for use on Poaceae crops against M. graminicola because of concerns for human and environmental health [6]. Long-term crop rotation or bare fallow kept free of weeds was shown to be effective in nematode control when a proper crop sequence is used [7], but this is impractical for most growers. Although recent progress has been made in biological control research [8], planting M. graminicola-resistant cultivars remains the most promising option for effective and economical control. Oat (Avena sativa L.)
is an annual grass grown worldwide. It is mainly cultivated for grain in temperate regions between latitudes 45-65 deg in the northern hemisphere and 20-46 deg in the southern hemisphere [8]. Meanwhile, in tropical and subtropical regions, oats are grown as forage for local use in some mountainous areas or during cooler seasons [9]. As a forage, oat is a high-biomass crop with a good nutritional profile, containing high levels of crude protein and water-soluble carbohydrates, and can be used as fresh feed, hay, or plant material for silage. In recent years, under the government's support and promotion, oat has been reintroduced to Taiwan as a winter forage crop [10,11]. Natural nematode resistance has been reported in oats against some root-knot nematodes [12,13] and against the cereal cyst nematode Heterodera avenae [14]. Three oat cultivars, Sniper, Tachiibuki, and Haeibuki, have been tested against three root-knot nematode species, M. incognita, M. arenaria, and M. hapla. Oat cultivars Sniper and Tachiibuki suppressed the reproduction and development of these three Meloidogyne species, in contrast to cultivar Haeibuki [12,13]. However, no breeding or experiments in oats for M. graminicola resistance have been reported. Given that oat planting has gradually gained popularity in Taiwan, along with the rising economic risk caused by nematode infection, the aim of this study was to examine the resistance to M. graminicola among the putative oat lines currently used for winter forage crop breeding in Taiwan.

Nematode Inoculum

The nematode M. graminicola was first collected from a field with a rice cropping history in Taiwan, identified morphologically through microscopic anatomical observation and body measurements with the de Man formula [15], and confirmed molecularly with ribosomal gene sequences [8]. A single-egg population was maintained on rice (cultivar Taoyuan No. 3) at 28 degC in a growth chamber. Prior to each experiment, eggs were extracted by macerating infected rice roots for 2 min in 1% NaClO [16] and placed in a Petri dish containing water for hatching. The 2nd-stage juveniles (J2) were collected on the day of each experiment.

Plant Materials

Four oat breeding lines (LA08085BS-T2, UFRGS136104-3, UFRGS136106-3, and UFRGS136119-2) from the Quaker Oat International Nursery (QION) 2015 and 2016 were tested in this study. LA08085BS-T2 was a breeding line from the oat breeding program of the University of Florida, derived from a cross of a Brazilian line and a Florida line (UFRGS 046048-1 F6/FL0206FSB-34-S1-B-S1). The UFRGS series were from the breeding program of the Universidade Federal do Rio Grande do Sul, Brazil, where UFRGS136104-3 and UFRGS136106-3 had the same pedigree but were from two independent crosses, and UFRGS136119-2 was derived from a cross between UFRGS 988012-1 and UFRGS 995088-3 (Supplementary Table S1). We selected these four oat lines for their good adaptation to subtropical climates. Thirty-five seedlings of each oat line were used in this study.

Attraction and Penetration Study

To quickly examine the oat lines for potential resistance to M. graminicola, a chemotaxis assay was conducted with oat seedlings. The seeds of the selected oat lines were sterilized with 1% sodium hypochlorite (NaClO) solution for 3 min and then thoroughly washed with distilled water. After soaking in distilled water for 2 h, the seeds were germinated in the dark for 24 h at 27 degC. The seedlings were then moved to 24 degC under a 12 h/12 h light/dark cycle for 2 days before use.
For evaluating the attraction of oats to M. graminicola, 200 J2s were added together with 18 mL of 23% Pluronic gel (Pluronic F-127, Sigma-Aldrich, St. Louis, MO, USA) to a Petri dish (9 cm diameter), shaking gently for uniform distribution. Then, one oat seedling was placed in the gel in each Petri dish. Throughout the process, the Petri dishes were kept on ice to retain the liquid state of the gel. The Petri dishes were then transferred to room temperature for the gel to set. The number of nematodes within a 1 mm diameter of the oat root tip was recorded at 8 and 24 h after inoculation. Five replications were set up for each tested oat line. To determine the penetration rate for an oat breeding line, the total number of J2s that had completed root penetration at 24 h after inoculation was counted after staining with acid fuchsin, as described by Byrd et al. [17]. In brief, the roots were washed and placed in 1.5% NaClO solution for 4 min with occasional agitation. After removing residual NaClO, the segments were stained by boiling for 30 s in staining solution. The staining solution contained 1 mL of stain (3.5 g acid fuchsin, 250 mL acetic acid, and 750 mL distilled water) in 30 mL of water. The roots were rinsed in running water, placed in acidified glycerin, heated to boiling, and then cooled to room temperature. The stained J2s inside the roots were counted under a stereoscopic binocular microscope. Five replications were set up for each treatment.

Inoculation Study

In order to observe and compare differences in disease development among the oat breeding lines, a nematode-inoculation culture-bag system was used. Three freshly germinated seeds of each breeding line were placed in the folding line of one culture bag and incubated in a growth chamber at 24 degC for 3 days. 200 J2s were inoculated onto the 3-day-old seedlings as described by Lasserre et al. [18]. Disease severity and nematode reproduction ability were used to evaluate the resistance of the oat genotypes. Plants without nematode inoculation were used as controls. Five replicates were set up for each treatment. The number of root galls represents the disease severity in this trial. The gall index (GI) was rated at 40 days postinoculation (dpi) on a five-point scale: 1 = no galling; 2 = 1-10 galls; 3 = 11-30 galls; 4 = 31-100 galls; and 5 = >100 galls/root system [18]. In addition, the culture bags were observed every day to record the occurrence of the first gall on each root system. All roots were fresh-weighed at the end of the experiment. The numbers of eggs and 2nd-generation juveniles were used to evaluate nematode reproduction ability. At the end of the trial, the total numbers of J2s and eggs were collected and counted. With the initial population density (Pi) of 200, the number of J2s inoculated at the beginning, the J2s collected from the water in the culture bags were used to calculate the final population density (Pf). The multiplication factor was then calculated as R = Pf/Pi. Eggs were harvested by macerating oat roots in 1% NaClO for 2 min, and the eggs were transferred into sterile water for counting as previously described [19].

Evaluation of Resistance and Susceptibility of Oat Lines

The gall index (GI) and multiplication factor (R) were used to infer the resistance/susceptibility of the oat lines to M. graminicola [20,21]. The modified rating scale is shown in Table 1.
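As a concrete illustration of the two metrics just defined, the short sketch below maps a gall count to the five-point GI scale and computes R = Pf/Pi. The resistance categories of Table 1 itself are not reproduced in the text above, so no classification thresholds are assumed here.

```python
# Minimal sketch of the gall index (GI) and multiplication factor (R)
# as defined in the text: GI is a 1-5 scale on gall counts per root
# system, and R = Pf / Pi with Pi = 200 inoculated J2s.
def gall_index(n_galls: int) -> int:
    """Map a gall count per root system to the 1-5 GI scale."""
    if n_galls == 0:
        return 1
    if n_galls <= 10:
        return 2
    if n_galls <= 30:
        return 3
    if n_galls <= 100:
        return 4
    return 5

def multiplication_factor(pf: float, pi: float = 200.0) -> float:
    """R = Pf / Pi, with Pi = 200 J2s as the initial inoculum."""
    return pf / pi

print(gall_index(42), multiplication_factor(950.0))  # -> 4 4.75
```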
Data Analysis

For the attraction and penetration study, the numbers of J2s were analyzed by analysis of variance (ANOVA) followed by Duncan's Multiple Range Test (DMRT) to measure differences between means. For the inoculation study, all data on root gall formation, nematode reproduction, and plant weight were analyzed by ANOVA, and means were separated by DMRT. All statistical analyses were performed using SAS 9.4 software (SAS Institute Inc., Cary, NC, USA). Significant differences were determined by p < 0.05.

Attraction and Penetration Study

The tested oat lines showed variable attraction to J2s. After 8 h, the greatest numbers of J2s were observed around the root tips of UFRGS136104-3, significantly (p < 0.05) greater than the numbers around the root tips of lines LA08085BS-T2 and UFRGS136106-3, while no significant difference was found between UFRGS136119-2 and the other three genotypes (Figure 1A). After 24 h, significantly fewer J2s were found around UFRGS136119-2, followed by UFRGS136104-3, LA08085BS-T2, and UFRGS136106-3 (Figure 1B). After acid fuchsin staining, J2s could be clearly seen inside the root tips of oats at 24 h after inoculation (Figure 2). Significant differences (p < 0.05) in J2 penetration were observed after 24 h (Figure 1C): four to eight times as many J2s were found in the root tips of lines UFRGS136104-3 and UFRGS136119-2 as in lines LA08085BS-T2 and UFRGS136106-3 (Figure 1C). Combining the numbers of J2s around and inside the root tips of each oat line after 24 h, significantly (p < 0.05) more J2s were attracted to and penetrated the root tips of lines UFRGS136104-3 and UFRGS136119-2 than those of lines LA08085BS-T2 and UFRGS136106-3 (Figure 1D).
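For readers without SAS, the ANOVA step of the Data Analysis section can be reproduced with SciPy, as in the sketch below. The J2 counts are made-up illustrative numbers, and Duncan's Multiple Range Test has no SciPy equivalent, so only the one-way ANOVA is shown.

```python
# A Python analogue of the one-way ANOVA described in Data Analysis
# (the study itself used SAS 9.4); data values are hypothetical.
from scipy.stats import f_oneway

j2_counts = {  # hypothetical J2 counts per oat line, 5 replicates each
    "LA08085BS-T2":  [12, 15, 10, 14, 11],
    "UFRGS136104-3": [48, 52, 45, 50, 47],
    "UFRGS136106-3": [11, 9, 13, 12, 10],
    "UFRGS136119-2": [44, 49, 46, 51, 43],
}
f_stat, p_value = f_oneway(*j2_counts.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # significant if p < 0.05
```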
Inoculation Study

The first gall formation in lines LA08085BS-T2, UFRGS136106-3, and UFRGS136119-2 was observed at 10 dpi, 3 dpi, and 6 dpi, respectively. At 40 dpi, the typical symptoms caused by M. graminicola, such as swellings and horseshoe-like hooks, were observed on several root tips of the tested oat lines, except UFRGS136104-3 (Figure 2). No gall was observed on UFRGS136104-3 during the trial (Table 2). The total number of root galls, the number of incompletely developed root galls, and the number of completely developed root galls of line LA08085BS-T2 were all significantly higher (p < 0.05) than those of UFRGS136106-3 and UFRGS136119-2 (Table 2). At 40 dpi, M. graminicola produced a significantly (p < 0.05) higher number of eggs in LA08085BS-T2 (Table 2). Also, the number of J2s harvested from LA08085BS-T2 was greater than from UFRGS136104-3, UFRGS136106-3, and UFRGS136119-2 (Table 2). The fresh root weight of the oats increased in lines LA08085BS-T2, UFRGS136106-3, and UFRGS136119-2 at 40 dpi, but there were no significant differences in the fresh root and leaf weights of the inoculated oat lines compared with control plants (Table 3). (Values in Tables 2 and 3 are mean +/- standard error (SE) of 5 replicates; means within columns followed by the same letter are not significantly different (p >= 0.05).)

Evaluation of Resistance/Susceptibility of Oat Lines

The resistance/susceptibility ratings of the tested oat lines are summarized in Table 4. LA08085BS-T2 behaved as a moderately susceptible host to M. graminicola: it might not be easily invaded initially (Figure 1), but the nematode population could successfully establish feeding sites and form many root galls at later stages. In contrast, UFRGS136104-3 was highly resistant to M. graminicola; even with some J2 penetration of the roots (Figure 1), the nematodes did not survive, no feeding sites developed within, and therefore no gall was observed. UFRGS136106-3 showed resistance to M. graminicola, and the nematode population within it was the smallest among the tested oat lines. Finally, UFRGS136119-2 exhibited resistance through reduced root gall formation leading to ineffective nematode reproduction.

Discussion

Meloidogyne graminicola, the rice root-knot nematode, is one of the limiting factors in the production of Poaceae crops, yet little is known about the pathogenicity of M. graminicola to oats.
In the present study, the resistance levels of oat lines from the forage crop breeding program in Taiwan were determined using a Petri dish chemotaxis assay, a root-tip penetration assay, and the newly developed culture-bag nematode-inoculation system. The experimental results not only fill an important knowledge gap concerning the M. graminicola host range and disease development on roots, but also reveal the limitations of using the results of Petri dish chemotaxis and root-tip penetration assays as indicators of resistance/susceptibility. In the case of oat line UFRGS136104-3, many J2s were attracted to the root tips and successfully entered the roots, but strong resistance at a later stage led to no gall formation. On the other hand, in oat line UFRGS136106-3, even though the first gall was observed very early in the trial, only mild damage had been caused by the nematodes by the end of the experiment. Plant-parasitic nematodes can recognize specific chemical substances from their hosts with their sensory organs (e.g., amphids and phasmids), allowing the nematodes to seek food [22]. We found that the attraction of M. graminicola J2s to the tested oat lines UFRGS136104-3 and UFRGS136119-2 was much higher than to lines LA08085BS-T2 and UFRGS136106-3. However, the level of host attraction to J2s was not associated with the plant resistance level. Cabasan et al. [23] observed equal attraction of M. graminicola J2s to resistant and susceptible rice genotypes. Host penetration is a key step for root-knot nematodes in completing their life cycles, and the defense responses they encounter from host plants can be pre-penetration, post-penetration, or both [24]. Pre-penetration defenses include physical root barriers and plant defensive biochemicals that inhibit Meloidogyne spp. penetration [25,26]. Two types of post-penetration defenses in M. graminicola-resistant rice genotypes have been described previously [27]: first, an early hypersensitive-response (HR)-like reaction that causes cell necrosis to prevent nematode feeding; second, a late response that results in poor development of the giant cells. Our results indicate that different defense responses are expressed in oat lines UFRGS136104-3, UFRGS136106-3, and UFRGS136119-2. The reproduction results suggest that oat lines UFRGS136104-3 and UFRGS136106-3 expressed pre-penetration or early-stage post-penetration resistance, while line UFRGS136119-2 exhibited late post-penetration resistance (Figure 3). Further gene expression studies on nematode-associated molecular pattern (NAMP)-induced basal defense-associated host genes or on markers of effector-triggered host defense pathways would support this observation.
The resistance mechanisms present in the M. graminicola-resistant oat lines may not resemble those previously reported in other Poaceae plants. To date, 19 genes for resistance to Heterodera avenae have been identified in wheat, barley, and oat [28-32]. Three flavone-C-glycoside compounds in methanolic root and shoot extracts of oats were found to be induced by Pratylenchus neglectus and H. avenae invasion and by methyl jasmonate [33]. Treatment with methyl jasmonate reduced the invasion of both nematodes and increased plant mass, compensating for damage caused by the nematodes, an effect attributed to the active flavone-C-glycosides [33]. In M. graminicola-resistant rice cultivars, the number of J2s achieving successful penetration was significantly lower, nematode development was slower, reproduction in the host was slower, significantly fewer galls were observed, and mature females were significantly smaller [23]. However, in the inoculation experiment of this study, M. graminicola successfully penetrated the roots of oat line UFRGS136104-3 in relatively high J2 numbers at 24 h postinoculation but was not able to induce any gall or to reproduce. Based on the data obtained in the present research, certain genetic determinants of resistance to M. graminicola may exist in oat lines UFRGS136104-3, UFRGS136106-3, and UFRGS136119-2. Previous studies showed that the resistance genes in peppers (N, Me1, Me2, Me3, Me4, Me5, Me6, Mech1, and Mech2) and in tomato (Mi) regulate the plants' ability to defend against the penetration of Meloidogyne spp. and to inhibit nematode reproduction [34-37].
Therefore, further plant physiological experiments along with genomic studies are needed to reveal the M. graminicola resistance mechanism in oats. The management of M. graminicola currently relies heavily on nematicide application. For long-term sustainable farming, incorporating nematode resistance into oat cultivar selection would be ideal to prevent disease occurrence in fields. This study demonstrated and provided essential information on natural M. graminicola resistance for future crop breeding.
Magnesium Nanoparticles for Surface-Enhanced Raman Scattering and Plasmon-Driven Catalysis

Nanostructures of some metals can sustain localized surface plasmon resonances, collective oscillations of free electrons excited by incident light. This effect results in wavelength-dependent absorption and scattering, enhancement of the incident electric field at the metal surface, and generation of hot carriers as a decay product. The enhanced electric field can be utilized to amplify the spectroscopic signal in surface-enhanced Raman scattering (SERS), while hot carriers can be exploited for catalytic applications. In recent years, cheaper and more earth-abundant alternatives to traditional plasmonic Au and Ag have gained growing attention. Here, we demonstrate the ability of plasmonic Mg nanoparticles to enhance Raman scattering and drive chemical transformations upon laser irradiation. The plasmonic properties of Mg nanoparticles are characterized at the bulk and single-particle level by optical spectroscopy and scanning transmission electron microscopy coupled with electron energy-loss spectroscopy, supported by numerical simulations. SERS enhancement factors of ~10^2 at 532 and 633 nm are obtained using 4-mercaptobenzoic acid and 4-nitrobenzenethiol. Furthermore, the reductive coupling of 4-nitrobenzenethiol to 4,4'-dimercaptoazobenzene is observed on the surface of Mg nanoparticles under 532 nm excitation in the absence of reducing agents, indicating a plasmon-driven catalytic process. Once decorated with Pd, Mg nanostructures display an enhancement factor of 10^3 along with an increase in the rate of the catalytic coupling. The results of this study demonstrate the successful application of plasmonic Mg nanoparticles in sensing and plasmon-enhanced catalysis.

Plasmonic metallic nanoparticles (NPs) can sustain localized surface plasmon resonances (LSPRs), coherent oscillations of conduction electrons. LSPRs are driven by an incident oscillating electric field, i.e., light, and lead to enhanced, wavelength-dependent absorption, scattering, and local field enhancement. The latter is at the basis of the signal amplification in surface-enhanced Raman scattering (SERS) [1,2] and metal-enhanced fluorescence (MEF) [3,4], for instance [5-7]. Typical SERS substrates are Au- and Ag-based [8], but recently, alternative plasmonic metals such as Cu [9] and Al [10] have gained attention for their different resonance ranges, biocompatibility, and lower cost than Au and Ag [11]. Both Cu [12] and Al [13] have been explored as SERS substrates, reflecting the interest toward alternative materials. Mg is a biocompatible [14] and earth-abundant metal, with a plasmonic quality factor superior to that of Au and Cu at wavelengths below 500 nm, and to that of Al across the visible range [11]. Mg's spontaneously formed oxide layer protects the metallic core against further oxidation by air and is not detrimental to its plasmonic properties [15-17]; appropriate coatings can further provide protection against oxidation in aqueous media [18]. Despite these qualities and recent advances, SERS from Mg NPs has yet to be reported. LSPR decay also produces hot carriers [19-23], which makes plasmonic NPs of interest for applications in light-enhanced catalysis. Examples of plasmon-mediated catalytic reactions include CO2 reduction [24], dry reforming of CH4 [25], oxidation of NH3 and CO [26], C-F bond activation [27], and CH3OH decomposition [28].
As plasmon-driven reactions occur on the surfaces of the metals supporting LSPRs, they can also be monitored by SERS. For instance, the decarboxylation of 4-mercaptobenzoic acid (4-MBA) [29,30] and the reductive coupling of 4-nitrobenzenethiol (4-NBT) to 4,4'-dimercaptoazobenzene (DMAB) [31,32] are common probes for examining a plasmonic substrate's ability to drive chemical transformations. Photocatalytic performance can be improved by decorating a plasmonic metal with small amounts of catalytically active, albeit poorly plasmonic, components [33-37]. Such complexes have demonstrated light-enhanced catalytic performance in a wide range of reactions, including H2 dissociation and NH3 decomposition [35,36]. Recently, the photocatalytic activity of plasmonic Mg-based complexes was demonstrated, with Mg-Pd NPs achieving a 2-fold decrease in the activation energy and enhanced selectivity under light excitation for acetylene hydrogenation [38]. Further, the photocatalytic activity of bimetallic Mg-Au nanostructures in the reduction of 4-NBT to DMAB was demonstrated using tip-enhanced Raman spectroscopy (TERS) [39], verifying the coupling between the Mg core and the Au decorations.

Here, we demonstrate the application of Mg faceted spheroids as SERS substrates and as SERS-trackable photocatalysts. We examine the SERS signal from 4-MBA and 4-NBT and calculate the enhancement factor (EF) for Mg NPs under 532 and 633 nm laser excitation. We show that the reductive coupling of 4-NBT to DMAB proceeds on the surface of Mg NPs protected by a native oxide layer under 532 nm laser excitation, in the absence of additional catalytically active components and reducing agents. We further extend the investigation to Mg NPs decorated with Pd (Mg-Pd NPs) and compare their SERS signal, EF, and catalytic behavior to those of Mg NPs. We rationalize and support our SERS findings with experimental near-field studies using monochromated scanning transmission electron microscopy coupled with electron energy-loss spectroscopy (STEM-EELS) and with numerical results obtained in the discrete-dipole approximation (DDA). This successful demonstration of SERS by Mg NPs unravels the potential of this earth-abundant and biocompatible material for sensing applications, while the ability of Mg to drive surface reactions under light excitation further extends its application range to plasmon-enhanced catalysis.

RESULTS AND DISCUSSION

Characterization of Mg NPs. Mg faceted spheroids (Figures 1A and S1) were synthesized colloidally following a recently published seed-mediated growth method [40]. The resulting NPs are considerably more isotropic than the Mg hexagonal single-crystal and twinned nanoplatelets previously studied optically [15,41]. The average size of the long NP axis and its standard deviation, determined from scanning electron microscopy (SEM), are 121 +/- 10 nm (Figure 1B). The extinction profile of a colloidal suspension of the NPs in isopropyl alcohol (IPA) displays a strong LSPR peak centered at 574 nm (Figure S2). Colloidally synthesized Mg NPs have a native, self-limiting MgO layer that is protective in gas-phase oxidative environments up to 400 degC [42] but susceptible to water [40,43].
The incubation of Mg NPs with the Raman reporter molecules 4-MBA and 4-NBT [44-48] was thus performed in IPA to avoid degradation of the Mg; ethanol and methanol could also be used, but they lead to poorer colloidal stability than IPA. The NPs retain their metallic cores postincubation, as confirmed by the Mg bulk plasmon peak indicating metallic Mg at ~10.1 eV in STEM-EELS (Figures 1C,D and S3). No detectable change in the native oxide layer thickness was observed after the incubation. The oxide thickness of up to 10 nm, as mapped by STEM energy-dispersive X-ray spectroscopy (STEM-EDS) maps (Figures 1E and S4), is in good agreement with that previously reported for as-synthesized Mg NPs [15,49]. Note that this value is an upper bound, as the analytes can contribute to the oxygen signal, and the oxide layer may appear thicker due to STEM projection effects from a faceted 3D object.

4-MBA and 4-NBT bind to the surface oxide of Mg NPs. The incubated Mg NPs were washed three times in IPA prior to STEM and SERS measurements. STEM-EDS maps reveal a localization of the S signal on the NPs (Figure 1E), and SERS signals from both analytes are observed (see next section). While thiols are not expected to interact with an MgO surface in the same way as in the known covalent Au-S bond [44], MgO has been studied for use as a desulfurizer to remove gaseous pollutants such as SO2 and H2S [50,51] because it offers effective adsorption sites for those molecules at room temperature [52-57]. H2S adsorbs to the Mg2+ sites in MgO through S [52]. Analogously, 4-MBA and 4-NBT can be expected to bind to MgO via a deprotonated thiol formed in solution. Other interactions, such as through the carboxyl group of 4-MBA and the nitro group of 4-NBT, are possible, given that both acetic acid and NO2 can adsorb on MgO [51,58].

Binding and SERS Spectra of 4-MBA and 4-NBT on Mg NPs. Mg NPs enhance the Raman signal of surface-bound 4-MBA and 4-NBT at both 532 and 633 nm. SERS spectra were obtained from analyte-incubated colloidal Mg NPs deposited on membrane filters to produce dense regions of dry NPs (Figure S5) [59]. Differences are observed when comparing the spectra of the analytes on dry NPs to their normal Raman spectra (Figure 2A). With 4-MBA, 532 and 633 nm excitation give rise to identical spectral features, and no change in the spectra was observed over time, indicating the stability of the analyte on the surface of the NPs. The SERS signal intensity of the 1080 cm-1 peak at 100 randomly selected regions on dry Mg NPs is on average 42% higher at 532 nm than at 633 nm (Figure 2D). The 4-MBA SERS spectra are in good agreement with previous studies on Au and Ag [45-48,60], as well as on metal oxide semiconductor substrates [61-63]. The positions of the SERS peaks indicate the presence of a deprotonated state (COO-), as discussed in detail in the Supporting Information (SI). Further, the absence of the S-H stretching and bending modes near 2580 and 915 cm-1 [45,48], respectively (Figure S7), implies dissociation of the S-H bond. Finally, the intense bands at 1080 and 1593 cm-1 in the SERS spectrum of 4-MBA arise from the aromatic ring [45-48,60,64] and are, as described in the SI, assigned to the D6 (containing C-S stretching) and D3 modes, respectively [65].

The binding geometry of 4-MBA on the surface of the Mg NPs can be deduced from the SERS spectra.
The carboxylate anion stretching mode near 1417 cm⁻¹ indicates that COO⁻ is not involved in binding to the Mg NPs, as otherwise the mode would appear at lower wavenumbers. 45,64 Further, the absence of the out-of-plane ring vibration mode (estimated to be D₁₇) near 718 cm⁻¹ confirms that 4-MBA is not lying flat along the surface, 48,60,64 and the presence of the D₁ and D₂ modes (containing aromatic C−H stretching) at 3063 cm⁻¹ (Figure S7) indicates that 4-MBA is positioned perpendicular to the surface of the Mg NPs with the carboxyl group pointing away from the surface. 64,66 Therefore, 4-MBA molecules are coordinated to Mg NPs through the S atom, as on Au and Ag. 64

The SERS spectrum of 4-NBT at 633 nm also agrees with spectra on Au and Ag. 32,67−70 As with 4-MBA, the 4-NBT S−H stretching mode at 2549 cm⁻¹ in normal Raman is not present in SERS (Figures S7 and S8), implying binding through S 69 (detailed mode assignments in the SI). At 532 nm, 4-NBT undergoes reductive coupling to DMAB. This reaction occurs much faster at 532 nm than at 633 nm on Ag NPs 71 and only at 532 nm with Mg NPs; this catalytic coupling will be discussed later.

We also observe a mode in the 4-NBT SERS spectrum not commonly reported in the literature: a peak at 1295 cm⁻¹. This shoulder is lower in energy than the N−O stretching mode and is attributed to the 4-NBT anion (4-NBT⁻). This feature has been reported with TERS, where the anion peak appeared at 1305 cm⁻¹ on Au and 1289 cm⁻¹ on Ag. 72,73 Likewise, Choi et al. calculated that the anion radical of 4-NBT and its conjugate acid can induce a shift of the N−O stretching mode to below 1300 cm⁻¹. 74 The reduction of 4-NBT can, in principle, result in the formation of 4-aminobenzenethiol (4-ABT) or DMAB; 74−77 however, neither is observed here, and we note that the 4-NBT spectrum remained unchanged over prolonged 633 nm excitation.

SERS is also observed for analyte-incubated Mg NPs dispersed in IPA. The spectra (Figure 3A−D, top; full spectra in Figures S9 and S10) include contributions from IPA (Figure 3E,F; full spectra in Figure S11), but the dominant analyte SERS peaks are clearly visible. In all cases, the D₃ mode near 1580 cm⁻¹ is spectrally distinct from the IPA Raman bands. Meanwhile, the normal Raman spectra of concentrated 4-MBA and 4-NBT solutions in IPA (Figure 3A−D, bottom) show the expected solvent peaks and unbound analyte peaks, which differ from those of SERS.
The peaks observed in the SERS spectra of 4-MBA on colloidal Mg NPs are consistent with those on dry NPs. Meanwhile, the SERS spectra of 4-NBT from colloidal Mg NPs differ from those of dry NPs. The most pronounced difference appears in the N−O stretching mode at 633 nm excitation. With colloidal NPs, a broad peak is observed at 1302 cm⁻¹, shifted from the SERS peak on dry NPs (1335 cm⁻¹) and the normal Raman peak (1338 cm⁻¹). This shift to lower wavenumbers is attributed to the presence of 4-NBT⁻; 72,73 the broadening indicates the coexistence of the neutral and anionic forms. At 532 nm, this region displays an N−O stretching band at 1312 cm⁻¹, indicative of 4-NBT⁻, and a shoulder at 1343 cm⁻¹ from 4-NBT. The difference in the relative intensities of the N−O stretching modes of the anion and the 4-NBT molecule at different wavelengths suggests that the incident electromagnetic radiation plays a role in the bond dissociation of 4-NBT on Mg NPs. The prominence of 4-NBT⁻ on colloidal NPs, compared to dry NPs, can be explained by its stabilization in IPA. The SERS spectrum of 4-NBT on colloidal NPs does not change over time, unlike that of dried NPs at 532 nm.

EF of Mg NPs. Plasmonic effects largely contribute to the enhancement observed in SERS. Mg NPs comprise a metallic (confirmed by EELS, Figure 1) plasmonic core with a ∼10 nm oxide layer; we have previously demonstrated experimentally and numerically that this layer only minimally affects the LSPRs of Mg NPs. 15,41 However, the oxide acts as a spacer separating the analyte molecules and the metal surface, 15,43,49 such that a modest SERS EF is expected.

The colloidal SERS and normal Raman spectra are used to calculate the EF of Mg NPs using 78

EF = (I_SERS/N_Surf) / (I_Raman/N_Vol)    (1)

where I_SERS and I_Raman are the intensities of a vibrational mode, and N_Surf and N_Vol are the numbers of analyte molecules probed in SERS and normal Raman, respectively. The intensities of the D₃ mode at ∼1580 cm⁻¹ are used for I_SERS and I_Raman, as this peak does not overlap with any IPA Raman bands and is present for both analytes. N_Surf is calculated by estimating a monolayer coverage on the surface of the NPs, an approach commonly used to calculate SERS EFs. 79,80 We approximate the shape of a faceted spheroid as a sphere; e.g., a 121 nm diameter NP has a surface area of 4.6 × 10⁴ nm². Using 1.738 g cm⁻³ for the density of Mg, 14 the concentration of Mg in the colloids from ICP-OES analysis (Table S1), a thiol footprint of 0.22 nm², 79 and a scattering volume (volume of illumination) of 6.54 × 10⁻¹⁷ m³, N_Surf is calculated to be 1.09 × 10⁷ and 1.05 × 10⁷ molecules for 4-MBA and 4-NBT adsorbed on Mg NPs, respectively. N_Vol is calculated by multiplying the concentration of the analyte solution (0.1 M for 4-MBA and 0.01 M for 4-NBT) by the scattering volume.

The EFs for both analytes at 532 and 633 nm are on the order of 10² (Table 1). To assess the validity of the monolayer surface coverage approximation, the S content was quantified by ICP-OES for the 4-MBA sample. The N_Surf obtained with this alternative method is 8.83 × 10⁶ molecules, consistent with the monolayer estimate (1.09 × 10⁷ molecules) and resulting in EFs of ∼10² (Table S2). The EF of Mg NPs is higher at 633 nm, despite the SERS signal being higher at 532 nm for both dry and colloidal NPs. This occurs because the normal Raman signal, and hence the cross-section, is smaller for both analytes at 633 nm.
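As an illustration of the bookkeeping behind eq 1, the short Python sketch below reproduces the monolayer estimate of N_Surf and the resulting EF. Only the geometric quantities are taken from the text; the Mg mass concentration is a placeholder for the ICP-OES value in Table S1 (not reproduced here), and the peak intensities are arbitrary.

import numpy as np

# Minimal sketch of the monolayer EF estimate (eq 1). The Mg concentration
# below is an ASSUMED placeholder; the measured ICP-OES value is in Table S1.
AVOGADRO = 6.022e23

d_np = 121e-9          # NP diameter (m), sphere approximation
rho_mg = 1.738e6       # Mg density (g m^-3), i.e. 1.738 g cm^-3
footprint = 0.22e-18   # thiol footprint (m^2 per molecule)
v_scatter = 6.54e-17   # scattering volume (m^3)
c_mg = 1.3e3           # ASSUMED Mg mass concentration (g m^-3), ~1.3 mg/mL
c_analyte = 100.0      # analyte concentration (mol m^-3), 0.1 M for 4-MBA

r = d_np / 2
area_np = 4 * np.pi * r**2                    # NP surface area (m^2)
vol_np = (4 / 3) * np.pi * r**3               # NP volume (m^3)
n_np = c_mg * v_scatter / (rho_mg * vol_np)   # NPs inside the scattering volume

n_surf = n_np * area_np / footprint           # molecules probed in SERS
n_vol = c_analyte * AVOGADRO * v_scatter      # molecules probed in normal Raman

def enhancement_factor(i_sers, i_raman):
    """EF = (I_SERS / N_Surf) / (I_Raman / N_Vol)."""
    return (i_sers / n_surf) / (i_raman / n_vol)

# Example with arbitrary D3-mode peak intensities:
print(f"N_Surf = {n_surf:.2e}, N_Vol = {n_vol:.2e}")
print(f"EF = {enhancement_factor(1.0, 2.5):.0f}")

With the placeholder concentration, the script returns N_Surf ≈ 1.1 × 10⁷ molecules, matching the value quoted above; the EF then follows directly from the measured intensity ratio.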
In comparison, a control experiment performed using 58 nm spherical Au NPs incubated with the 4-MBA analyte under equivalent conditions leads to an EF of 10² with the monolayer approximation and of 10⁴ when using ICP-OES of the S content (Figure S12 and Table S3), confirming that the monolayer approach produces an underestimate of the EF.

The EFs reported in Table 1 are conservative estimates. First, the monolayer approximation and the ICP-OES of S both overestimate the number of analytes bound to the NP surface, the former even more than the latter. Since the interaction of MgO with S is expected to be weaker than the covalent binding between Au and S, the maximum packing can hardly be achieved on Mg NPs; therefore, N_Surf is likely overestimated and the EF underestimated. ICP-OES reports on all S-containing species, including those on the surface and in the solution, so using this concentration leads to an upper bound for N_Surf and a lower bound for the EF. Further, eq 1 assumes that the scattering volume is identical between the SERS and Raman measurements. Though the same acquisition conditions were used, the Mg NP colloids were opaque, while the analyte solutions were clear. This difference leads to a smaller effective scattering volume for the Mg-containing solution (SERS) and therefore a higher EF than what is calculated.

The enhancement from Mg NPs is not sufficient to observe a SERS signal from single particles. Correlated dark-field optical scattering spectroscopy, SERS, and SEM of Mg NPs (Figures S13 and S14) show that neither single particles nor small (<30 NPs) aggregates generate sufficient enhancement for signal detection. In addition, this correlation reveals the heterogeneity of random aggregates, both in size and arrangement, leading to a heterogeneous scattering response.

Finally, we argue that the enhancement obtained from Mg NPs is predominantly an electromagnetic effect. SERS mediated by a chemical enhancement mechanism alone would be independent of the NP shape. However, in this study, we do not observe SERS for Mg hexagonal nanoplatelets. 43 As opposed to platelets, the formation of electromagnetic hot spots for spheroids is not sterically hindered, leading to a strong enhancement.

Electromagnetic Localization and Enhancement by Mg NPs. SERS is commonly attributed to electromagnetic hot spots, 81 such as those formed at the corners of sharp NPs and between NPs in aggregates. Here, we experimentally probed the hot spots formed around and between Mg faceted spheroids using monochromated STEM-EELS and supported these observations with numerical simulations for both electron-beam and light excitations.
LSPR modes from a single elongated Mg faceted spheroid were first probed with STEM-EELS. The most intense mode is the dipole resonance along the NP's longitudinal axis near 2.5 eV, stretching beyond the oxide shell. Figure 4 shows the excitation probability from 2.45 to 2.55 eV, where this mode dominates. To extract spectral information and higher-order modes, we also performed blind source separation with non-negative matrix factorization (NMF; Figure S15). A Lorentzian line shape fitted to the longitudinal dipole peak in the NMF spectral component revealed a peak energy of 2.46 eV (504 nm). This resonance energy is comparable to the extinction peak observed in bulk UV−vis/NIR (Figure S2), with minor differences attributed to the different dielectric environments. Modes at higher energies were also present and were extracted with NMF, with the expected shape-dependent energies and distributions (Figure S15).

We then confirmed experimentally and numerically that dimers of plasmonic Mg NPs form prominent hot spots. Coupling across the interparticle gap splits the dipolar resonance into bonding and antibonding modes. 83−85 The former gives rise to an optically bright gap hot spot, while the antibonding resonance is optically dark. 85 With an electron beam, it is possible to spatially map the excitation energy and localization, leading to mode identification, as shown above for an isolated NP, for both bright and dark modes. For the dimer in Figure 5A−D, we extracted a bonding-mode energy of 1.64 eV (756 nm) with NMF. This mode shows a strong electron excitation probability at the longitudinal tips of the dimer. As expected, there is no excitation in the interparticle gap because of the radial field symmetry of an electron, which can interact with the antibonding mode instead. However, when excited with light, the bonding dipole is indeed the gap mode that leads to a strong enhancement, as simulated in Figure 5G. Higher-order modes and antibonding gap modes are also observed at higher energies and reported in Figure S16.

Numerical results support the mode assignment and electron loss distribution. We performed simulations in the electron-driven discrete-dipole approximation (e-DDA) 82 for a dimer consisting of Wulff construction-generated faceted spheroids (Figure S17). 86 The total size of each NP, measured from tip to tip, is 120 nm, including a metallic Mg core and an outermost 10 nm MgO layer. The NPs were positioned facet-to-facet with a 2 nm interparticle gap and rested on a 20 nm Si₃N₄ layer, reproducing the experimental parameters as closely as possible. A simulated point spectrum for an electron-beam trajectory just outside the dimer (Figure 5F) reveals features comparable to those obtained experimentally, with a distinct peak at low energy (2.30 eV) and multiple intense peaks at higher energies corresponding to the broad and intense experimental peak. The simulated electron loss probability map (Figure 5E) also agrees well with the experimental results (Figure 5C), depicting excitation of the bonding mode around the longitudinal tips of the dimer.

The thickness of the oxide layer has only minimal effects on the coupled dimer behavior. Further numerical simulations (Figures S18 and S19) were performed to compare the spectral response of dimers with increasing oxide layer thicknesses. The spectral features of the simulated STEM-EELS data (Figure S18) present only minor differences: the low-energy peak redshifts from 2.30 to 1.80 eV, while its relative intensity increases, as the oxide shell is thinned from 10 nm to none.
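The Lorentzian peak-energy extraction applied above to the NMF spectral factors can be sketched in a few lines. This is illustrative only: the "factor" array below is synthetic stand-in data, since the measured NMF components are not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit a Lorentzian to an NMF spectral factor to extract the
# LSPR peak energy. The synthetic "factor" stands in for the exported data.
def lorentzian(e, a, e0, gamma, c):
    """Amplitude a, center e0 (eV), HWHM gamma (eV), constant offset c."""
    return a * gamma**2 / ((e - e0)**2 + gamma**2) + c

energy = np.linspace(1.0, 4.0, 300)                   # energy axis of the factor (eV)
factor = lorentzian(energy, 1.0, 2.46, 0.25, 0.02)    # placeholder spectrum
factor += np.random.default_rng(0).normal(0, 0.01, energy.size)

popt, _ = curve_fit(lorentzian, energy, factor, p0=[1.0, 2.5, 0.3, 0.0])
e_peak = popt[1]
print(f"Peak energy: {e_peak:.2f} eV ({1239.84 / e_peak:.0f} nm)")

The final conversion uses E (eV) = 1239.84/λ (nm), which is how the 2.46 eV dipole energy maps to the 504 nm value quoted above.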
DDA simulations performed on a Mg NP dimer excited by light reveal the formation of a hot spot and enable the determination of an oxide thickness-dependent field enhancement. We first observe that dimers, as discussed above, produce an electromagnetic hot spot when illuminated with light (Figure 5G). The highest scattering efficiency at the coupled dipolar peak occurs at 2.25 eV (Figure 5H) for a dimer with a 10 nm MgO shell. Numerical spectra of dimers with thinner oxide layers (Figure S19) reveal slight mode-energy shifts and an increase in the scattering cross section for thinner oxide shells.

The calculated electromagnetic EF agrees well with the experimental results. We calculated the EF as |E|⁴/|E₀|⁴ and used numerical inputs that match the experimental conditions, including the surrounding IPA medium. 78,87 We simulated excitation edge-to-edge for the monomer and along the interparticle axis for the dimer, then calculated their average electromagnetic EFs, i.e., the surface-averaged |E|⁴/|E₀|⁴ (Table S4). The EF was calculated at 532 nm to match the experimental excitation wavelength (Table 2) and at the corresponding maximum-Q_sca energy (Table S4).

Despite the infinity of possible particle and aggregate configurations, these simple models allow us to investigate the oxide layer effects on the EF, which are not addressable experimentally. Note that the alternative approximation |E(ω)|²|E(ω′)|²/|E₀|⁴, where ω and ω′ are the maximum-Q_sca and Stokes-shifted frequencies, 8,88 gives EFs comparable to those of |E|⁴/|E₀|⁴ (Table S4).

The numerical EFs calculated at 532 nm for the monomer and the dimer are 45 and 119, respectively, in the presence of a 10 nm oxide layer, and 93 and 33,533, respectively, in its absence (Table 2 and Figure S20). The experimental EFs are of the same order of magnitude as the EF values calculated for 5 and 10 nm oxide layers. This is consistent with the oxide layer thickness obtained from STEM-EDS (Figure 1E).

Binding, SERS Spectra, and EF of 4-MBA and 4-NBT on Mg−Pd NPs. Mg NPs decorated with 3.3 mol % Pd (Mg−Pd NPs) enhance the electric near field for SERS. Mg−Pd NPs were synthesized by partial galvanic replacement of colloidal Mg NPs by Na₂PdCl₄ (Figure 6A), as previously reported. 49 Successful decoration of Mg NPs with Pd is confirmed by STEM-EDS (Figure S21) and HAADF-STEM images (Figures 6B and S22). The Pd content was measured by ICP-OES to be 3.3 mol % (Table S1), and no change to the LSPR peak in the UV−vis/NIR extinction spectrum was observed (Figure S23), confirming the retention of the majority of the plasmonic, metallic Mg core.

4-MBA and 4-NBT bind to Mg−Pd NPs during incubation and remain on the surfaces of the NPs after cleaning. Indeed, thiols have been demonstrated to bind to Pd surfaces, 89−91 and thus with Mg−Pd, 4-MBA and 4-NBT are bound both to Pd sites and to MgO surfaces. The SERS measurements of Mg−Pd NPs were first performed on dry NPs deposited on membrane filters (Figure S5), as was done with Mg NPs. The SERS spectra of 4-MBA and 4-NBT adsorbed on dry Mg−Pd NPs (Figure 6C−F) differ from those of Mg NPs. First, the background in the Mg−Pd SERS spectra of 4-NBT is lower than that with the Mg NPs (Figure S24), likely due to fluorescence quenching by the metallic Pd. Second, the relative peak intensities of the D₆ (1080 cm⁻¹) and D₃ (1593 cm⁻¹) modes with respect to all other peaks are higher on Mg−Pd than on Mg NPs. The N−O stretching mode of the 4-NBT anion at 1295 cm⁻¹ is not observed on Mg−Pd NPs, though it may be obscured by the peak at 1335 cm⁻¹, which is broader on Mg−Pd compared to Mg NPs. No change over time in the spectra of 4-MBA at 532 and 633 nm and of 4-NBT at 633 nm has been observed.

The change in the relative peak intensities suggests a different binding geometry on Pd sites compared to MgO. The geometry can be deduced from the SERS spectra of 4-MBA. The absence of the C=O vibration at 1710 cm⁻¹ indicates that 4-MBA is deprotonated 45,48,60 and hence that the lower relative intensities of the COO⁻ peaks are not due to the presence of COOH. Similarly, the absence of the S−H modes at 2580 and 915 cm⁻¹ 45,48 and the presence of the carboxylate anion peak near 1416 cm⁻¹ are consistent with binding through S and not COO⁻. 45,64
So far, these geometry indicators match those of 4-MBA on the Mg NPs. However, on Mg−Pd NPs, the out-of-plane ring vibration mode (estimated to be D₁₇) and the D₄ mode (containing C−H in-plane bending) are present near 718 cm⁻¹ (Figure S24) and 1482 cm⁻¹ (Figure 6C,D), respectively, indicating analyte molecules positioned flat or at an angle to the Pd surface. 46,48,60 The D₁ and D₂ modes (containing aromatic C−H stretching) at 3063 cm⁻¹ (Figure S24) stem from perpendicularly oriented 4-MBA and are still present in the 532 nm Mg−Pd SERS spectrum, confirming that analytes bound to MgO sites have the same perpendicular binding geometry as they do on Mg NPs. 64,66

The SERS peaks in the colloids (Figures 7 and S25−S27) match those from dry Mg−Pd NPs. The 4-MBA SERS peaks in colloidal Mg−Pd NPs are identical to those on dry Mg−Pd NPs at both wavelengths, indicating an equivalent orientation of the molecule on the surface. Similarly, the 4-NBT signals are also equivalent at 633 nm, although the N−O stretching mode (1338 cm⁻¹) appears broadened at lower energy in the colloids, implying the presence of a N−O stretching mode from 4-NBT⁻. At 532 nm excitation, the N−O stretching mode of 4-NBT⁻ at 1307 cm⁻¹ is pronounced and has a higher relative intensity than that of the neutral species.

The EF of colloidal Mg−Pd NPs based on the monolayer surface coverage approximation is calculated to be ∼10³, an order of magnitude higher than that of Mg NPs (Table 1). The calculation employs the Mg and Pd concentrations from ICP-OES (Table S1) and 12.007 g cm⁻³ for the density of Pd, 92 in addition to the parameters described previously. N_Surf is calculated from the total surface area of the Mg−Pd NPs, assuming 121 nm spheres of Mg containing the amount of Pd determined by ICP-OES (Table S1), such that the surface area is similar to that of Mg spheres. An alternative approach to calculating the surface area, using Pd spheres on Mg spheres, led to substantially similar values (SI and Table S5). As with Mg NPs, the EFs of Mg−Pd NPs are higher at 633 nm excitation than at 532 nm. A calculation using the ICP-OES-determined S content led to an EF on the order of 10² (Table S6), a value larger than that for Mg NPs, yet smaller than the surface coverage estimate due to the overestimation of N_Surf, as discussed previously.

The higher EF of Mg−Pd NPs compared to Mg NPs is a result of the different orientation taken by 4-MBA and 4-NBT. As described earlier, some molecules are oriented at an angle on the surface of Mg−Pd NPs, while they were perpendicular to the surface of Mg NPs. As a result, the D₆ (1080 cm⁻¹) and D₃ (near 1580 cm⁻¹) modes are enhanced further on Mg−Pd NPs. 46 Because the D₃ mode was the only peak available without an overlap with the signal from IPA, it was used for the calculation of the EF. Without the additional enhancement induced by the orientation of the molecule, the magnitude of the electromagnetic enhancement from plasmonic Mg−Pd NPs is likely similar to that of Mg NPs.

Coupling Reaction of 4-NBT to DMAB on Mg and Mg−Pd NPs. The SERS spectra of 4-NBT on dry Mg NPs at 532 nm (Figures 8A, S28, and S29) indicate the formation of DMAB, a product of the reduction of 4-NBT molecules, through a widely reported plasmon-driven reductive coupling reaction. 93 Time-resolved spectra show that the intensity of the N−O stretching mode from 4-NBT (1337 cm⁻¹) decreases while peaks corresponding to DMAB at 1147, 1394, and 1446 cm⁻¹ gradually evolve. The peak at 1147 cm⁻¹ can be assigned to the C−N stretching mode, while the peaks at 1394 and 1446 cm⁻¹ are −N=N− stretching modes of DMAB. 32,94
The reaction also produces 4-NBT⁻ species, evidenced by the presence of the N−O stretching mode at 1307 cm⁻¹. The intensity of this peak decreases rapidly in the early stages of the reaction and eventually vanishes. The disappearance of the 4-NBT⁻ species could indicate its conversion to DMAB and/or desorption in the form of 4-NBT.

Not all 4-NBT molecules are converted into DMAB. Across the four laser powers used (3.3, 42.4, 86.0, and 133.2 μW at the sample), none led to the complete conversion of 4-NBT to DMAB, as evidenced by the presence of the N−O stretching mode of 4-NBT at 1337 cm⁻¹ even after the DMAB signal plateaus (Figures 8B and S30). The reactions appear to be irreversible: the spectra did not revert to their initial form nor undergo further changes, even after prolonged periods without laser irradiation.

The catalytic performance can be assessed from the ratio of the DMAB to the 4-NBT peaks after the DMAB signal reaches a plateau. Here, we use the ratio between the peaks at 1446 cm⁻¹ (DMAB) and 1337 cm⁻¹ (4-NBT) after 152.5 s of reaction time. The peak ratio is known to increase with laser power as a result of the increased conversion, 32,75 and, more specifically, in areas with a higher electromagnetic field. 31 Here, the ratio varied slightly from region to region owing to enhancement differences; therefore, the data in Figure 8C report the average of 20 regions. The ratio of the DMAB to the 4-NBT peaks is 0.27, 0.45, 0.31, and 0.31 at 3.3, 42.4, 86.0, and 133.2 μW, respectively (Figure 8C).

The resulting DMAB to 4-NBT peak ratio is lower than the 0.56 reported for Ag NPs, 31 as expected from the lower field enhancement of MgO-covered Mg. The decrease in the ratio at higher laser powers is attributed to a higher rate of desorption, indicated by the decline of both the 4-NBT and DMAB peak intensities (Figure S30). The slope of the decline increases with laser power and is steepest at 133.2 μW. This desorption dependence on laser power has previously been reported for 4-NBT on Au NPs, 68 with a presumed involvement of plasmonic effects beyond photothermal heating. 72

The plasmon-driven reductive coupling of 4-NBT is believed to proceed via the transfer of hot carriers, produced by the plasmonic NPs, to the molecules on the surface. 72,95 Although Golubev et al. highlighted the contribution from thermal effects, 96 and Mg NPs can produce a heat output comparable to Au NPs, 97 Keller and Frontiera suggested that heat is not the dominant mechanism. 98 Here, the fact that the reaction proceeds at 532 nm, where Mg's photothermal efficiency is lower, 97 but not at 633 nm supports the hot carrier-mediated mechanism despite the oxide layer. Medeghini et al. recently showed that a small number of hot carriers from Au nanorods can pass through a mesoporous silica layer of thickness similar to the MgO layer on Mg NPs (∼10 nm), 99 supporting this observation. An alternative mechanism could involve the O₂⁻ radicals formed in the oxygen vacancies of MgO. 100 Indeed, a hot carrier origin and a facilitating role of O₂⁻ radicals in 4-NBT coupling have been proposed by Zhang et al. 101 In this study with Mg NPs, we are unable to unequivocally discern a dominant mechanism behind the reductive coupling of 4-NBT.
The coupling reaction of 4-NBT on Mg−Pd NPs (Figures 8D, S31, and S32) proceeds with several differences compared to Mg NPs. First, the rate of the 4-NBT to DMAB transformation is higher on Mg−Pd NPs than on Mg NPs and reaches a plateau more quickly at equal laser power, as expected from the presence of catalytically active Pd (Figures 8E and S33). Consequently, we chose lower laser powers (3.3, 11.5, 22.3, 32.4, and 42.4 μW) for the time-resolved SERS spectra of Mg−Pd NPs (Figure 8E). Second, more 4-NBT⁻ is produced on Mg−Pd NPs at the initial stage of the reaction, as revealed by the early appearance of an intense N−O stretching mode at 1308 cm⁻¹. Unlike with Mg NPs, the anion peak is initially more intense than the same mode in 4-NBT (1334 cm⁻¹), indicating an increased proportion of 4-NBT⁻; it eventually falls below that of the 4-NBT. Since the 4-NBT⁻ peak is not observed at 633 nm, the dominance of the peak at 532 nm suggests that the 4-NBT⁻ formation is driven by the electromagnetic field. Finally, the presence of Pd on Mg NPs increases the ratio of the DMAB (1447 cm⁻¹) to the 4-NBT (1334 cm⁻¹) peaks after 147.5 s of reaction time (Figure 8F). The ratio is highest under 11.5 μW laser power, at 0.90, double the maximum obtained from Mg NPs. As with Mg NPs, the ratio of the DMAB to the 4-NBT peaks decreased at higher laser power, likely due to desorption processes taking place at higher powers. Still, when comparing at equal laser power, the ratio at 42.4 μW with Mg−Pd NPs is 0.54, i.e., 20% higher than the ratio obtained on Mg NPs at this power. With the LSPR of Mg coupling to Pd, hot carriers could be generated in the Pd and transferred to 4-NBT, similarly to what was observed for Pd-decorated, oxide-coated Al NPs. 33 Note that the differences in the ratios between Mg−Pd and Mg NPs may be overestimated, since 4-NBT⁻, produced in larger proportion on Mg−Pd, is not included in the calculations.

CONCLUSION

We demonstrated the application of Mg and Mg−Pd faceted spheroids for SERS and SERS-trackable plasmon-driven catalysis. The SERS EF measured using the 4-MBA and 4-NBT analytes at 532 and 633 nm was on the order of 10² and 10³ for Mg and Mg−Pd NPs, respectively. LSPR modes in Mg NPs and dimers were experimentally mapped and indicated that dimers formed coupled modes, as confirmed by numerical simulations with electron-beam and light excitations. Simulations also provided a calculated value for the EF of Mg NP dimers, which matches the experimental results.

By analyzing the SERS spectra of 4-MBA, we determined that the analyte was bound to the surface of the NPs through S, with a perpendicular orientation on MgO surfaces and a tilted or flat orientation on Pd surfaces. While the decarboxylation of 4-MBA was not observed, 4-NBT was converted to DMAB on the surface of dry Mg and Mg−Pd NPs under 532 nm excitation through a plasmon-driven reductive coupling reaction. The final SERS peak ratio of DMAB to 4-NBT was higher for Mg−Pd NPs than for Mg NPs, and the former also displayed a higher rate of reaction. Whereas on Mg−Pd the reaction can be mediated by hot carriers generated in the Pd due to near-field effects in plasmonic Mg, the mechanism of the coupling on Mg NPs is not fully understood and requires further investigation.
The results presented here further validate the applicability of Mg NPs as a plasmonic material. Mg's capability to form SERS substrates holds promise for sensing applications. The demonstration of molecular binding through S suggests that the rich library of approaches relying on Au−S affinity could be applicable to Mg NPs. Further, the ability of Mg NPs to drive light-induced reactions on their surface, confirmed by SERS, solidifies Mg's attractiveness as an earth-abundant platform for plasmon-enhanced catalysis.

Synthesis of NPs. Mg faceted spheroids were synthesized using the previously reported one-pot seed-mediated growth method. 40 Briefly, di-n-butylmagnesium (MgBu₂) in heptane (1.75 mL, 1.0 M) was injected quickly into a freshly prepared Li₂Napht solution containing poly(vinylpyrrolidone) (20 mg) in a Schlenk flask under an Ar atmosphere at room temperature and under sonication (WARNING! MgBu₂ is pyrophoric and should be handled under inert conditions.). Naphthalene in THF (2 mL, 1.0 M) was added to the reaction mixture after 5 min of reaction time, converting all unreacted Li₂Napht to LiNapht. The resulting mixture was left to react for a further 60 min before being quenched by the injection of IPA (2 mL). The solid gray product was recovered by centrifugation (10,000 rcf), and residual byproducts were removed by rounds of centrifugation (10,000 rcf) and redispersion in THF twice, IPA once, THF once, and IPA twice, in the listed order, under inert conditions. The product was redispersed in IPA (15 mL).

Mg−Pd NPs were prepared by partial galvanic replacement of colloidal Mg faceted spheroids using the previously reported procedure. 49 In brief, colloidal Mg NPs (1 mL) were diluted with IPA (2 mL) before adding a solution of sodium tetrachloropalladate (Na₂PdCl₄) in IPA (3 mL). The stoichiometric amount of Na₂PdCl₄ was calculated based on the Mg content of as-synthesized Mg NPs obtained from ICP-OES. The resulting mixture was left to react for 1 h in a sealed vial under stirring. The solid gray product was recovered by centrifugation, and residual byproducts were removed by three rounds of centrifugation and redispersion in IPA. The product was redispersed in IPA (8 mL).

Au NPs (46 nm, citrate-capped) were synthesized using a seeded-growth method. Au NP seeds (12 nm, citrate-capped) were synthesized using a modified Turkevich methodology described by Schulz et al. 102
In brief, a solution (80 mL) containing citrate buffer (3:1 trisodium citrate/citric acid, 2.75 mM) and ethylenediaminetetraacetic acid (EDTA, 0.02 mM) was heated to a boil under vigorous stirring for 10 min before adding an aqueous solution of gold chloride (0.8125 mM, 20 mL). The resulting reaction was left to boil for a further 20 min under stirring, during which the reaction mixture turned red, producing Au NP seeds. The Au NP seeds were grown using the growth method reported previously, 103 involving successive additions of gold chloride and citrate. Briefly, trisodium citrate (34 mM, 2 mL) was added to distilled water (82.5 mL) and heated to a boil. As-synthesized Au NP seeds (2 mL) were added to the boiling mixture, followed by the addition of gold chloride (6.8 mM, 1.7 mL) after 1 min. The reaction mixture was heated to reflux for 45 min. Further successive additions of trisodium citrate (34 mM, 2 mL) and gold chloride (6.8 mM, 1.7 mL) were performed 5 times, with the mixture remaining under reflux for 45 min between growth steps. The size of the NPs was determined using UV−vis/NIR spectroscopy with the method proposed by Haiss et al. 104 and confirmed with SEM. The resulting colloidal Au NPs (40 mL) were centrifuged (5000 rcf) and redispersed in IPA (10 mL) before use.

SERS and Raman Measurements. As-synthesized Mg NPs (5 mL) were mixed with a solution of Raman reporter molecules (4-MBA or 4-NBT, 0.01 M, 5 mL) in IPA under an inert atmosphere and incubated overnight. The excess Raman reporter molecules were removed by three rounds of centrifugation (10,000 rcf) and redispersion in IPA, and the resulting NPs were redispersed in IPA (2 mL), all under inert conditions. The above procedure was repeated with Mg−Pd NPs (4 mL) and Au NPs (10 mL) using volumes of the Raman reporter solutions (0.01 M) equal to those of the NPs; unlike with Mg NPs, however, Mg−Pd NPs were redispersed in 1.6 mL of IPA, while Au NPs were redispersed in 2 mL of IPA. The resulting colloidal NPs (∼1.4 mL) were transferred to a cuvette under an inert atmosphere and sealed with a septum screw cap for colloidal SERS measurements. For SERS measurements of dry NPs, colloidal NPs (100 μL) were deposited by continuous feeding onto PES membrane filters under vacuum.

SERS and Raman spectroscopy were performed using a HORIBA Jobin Yvon LabRam 300 Raman system equipped with a CW 532 nm Nd:YAG laser (up to 500 mW power), a 633 nm HeNe laser (20 mW power), O.D. filters ranging from 0.01 to 100%, an Olympus BXFM-ILHS microscope with a motorized z-axis of freedom, a motorized x,y-adjustable stage, a HORIBA Syncerity detector, and the LabSpec 6 Spectroscopy Suite software. An Olympus LMPlanFl 50×/0.50 objective and a 600 g/mm grating were used for all measurements reported here. The beam diameter at the sample was 2.5 μm. The laser power at the sample position was measured using a Thorlabs slim Si sensor (400−1100 nm, 500 pW−500 mW) connected to a Thorlabs PM100D digital console. All data were converted to analogue-to-digital converter units (ADU) per unit laser power and integration time before analysis, i.e., divided by the laser power at the sample position and by the integration time. Data analysis was performed using OriginPro.
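The power-and-time normalization described above can be expressed compactly. The sketch below is illustrative: the two-column text export (wavenumber, counts) is a hypothetical file format rather than the instrument's native one, and the function name is our own.

import numpy as np

# Minimal sketch of the ADU normalization: raw detector counts divided by
# laser power at the sample and by integration time, i.e., ADU / (uW * s).
def normalize_spectrum(counts: np.ndarray, power_uw: float, t_int_s: float) -> np.ndarray:
    return counts / (power_uw * t_int_s)

raw = np.loadtxt("spectrum.txt")            # hypothetical export: wavenumber, counts
wavenumber, counts = raw[:, 0], raw[:, 1]
norm = normalize_spectrum(counts, power_uw=86.0, t_int_s=30.0)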
SERS on 4-MBA-bound dried NPs on membrane filters was conducted using 86.0 μW (532 nm) and 54.0 μW (633 nm) laser power at the sample position. One hundred regions were randomly selected across the layer of each sample, and a SERS spectrum at each region was collected by averaging over 3 consecutive acquisitions, each with a 30 s integration time. For every sample, the spectra from the 100 regions were averaged before a background was fitted by spline polynomial interpolation and then subtracted. To obtain the intensity distribution, the peak at 1080 cm⁻¹ was fitted in the unaveraged spectra from the 100 regions individually, by subtracting a linear background between regions before and after the peak and finding the maximum intensity within the region.

SERS on 4-MBA-bound dried single NPs and single aggregates drop-cast on coverslips was conducted using 86.0 μW (532 nm) laser power at the sample position. The SERS spectrum at each region of interest was acquired by averaging over 10 consecutive acquisitions, each with a 60 s integration time.

SERS on colloidal NPs was performed using 3.84 mW (532 nm) and 2.0 mW (633 nm) laser power at the sample position. The cuvettes containing the samples were positioned under the microscope, and the beam was focused on the colloidal region closest to the cuvette surface. For each SERS measurement, a normal Raman spectrum was acquired using solutions of the Raman reporter molecules (0.1 M 4-MBA in IPA and 0.01 M 4-NBT in IPA) to allow the evaluation of the EF from the peak intensities. The focus position was kept constant between SERS and normal Raman acquisitions of the same Raman reporter molecule. SERS and normal Raman spectra were acquired 10 times for each sample, with every measurement averaged over 3 consecutive acquisitions, each with a 30 s integration time. Samples were mixed by shaking the cuvettes between measurements to ensure an even distribution of the NPs. For every sample, the spectra from the 10 measurements were averaged before a background was fitted by spline polynomial interpolation and then subtracted.

SERS-based monitoring of the plasmon-driven coupling of 4-NBT to DMAB was carried out using the 532 nm laser at the stated laser powers. Unless otherwise stated, time-series SERS spectra were acquired at 20 individually selected regions for each sample, using a 5 s acquisition time for a total of 150 s at each region. Acquisition for each region began at the time the laser was turned on. The midpoint time of each acquisition, measured from turning on the laser, was used as the time stamp. For every time stamp from the same sample, the spectra from the 20 regions were averaged before a background was fitted individually by spline polynomial interpolation and then subtracted, unless specified otherwise. The peaks (1307 cm⁻¹ for 4-NBT⁻, 1338 cm⁻¹ for 4-NBT, and 1447 cm⁻¹ for DMAB) were fitted in the unaveraged spectra individually by subtracting a linear background between regions before and after each peak and finding the maximum intensity within a ±5 cm⁻¹ peak window. The peak intensities were then averaged at every time stamp for the same sample. The DMAB/4-NBT ratio was calculated by dividing the DMAB intensity by the 4-NBT intensity in each unaveraged spectrum and then averaging the ratio at every time stamp for the same sample.

Characterization of NPs. UV−vis/NIR spectroscopy was performed using a Thermo Fisher Evolution 220 UV−visible spectrophotometer with the sample in a PMMA semimicro cuvette at room temperature.
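The linear-background peak quantification and the DMAB/4-NBT ratio described above can be summarized in a short routine. This is a minimal sketch: only the ±5 cm⁻¹ search window and the nominal peak positions come from the text, while the width and offset of the flanking baseline windows, and the function names, are assumed for illustration.

import numpy as np

# Minimal sketch of the peak quantification: a linear background is drawn
# between windows flanking each peak, and the peak intensity is the
# background-subtracted maximum within +/-5 cm^-1 of the nominal position.
def peak_intensity(wn, counts, center, half_window=5.0, flank=15.0):
    left = (wn > center - flank - half_window) & (wn < center - half_window)
    right = (wn > center + half_window) & (wn < center + flank + half_window)
    # Linear baseline through the means of the two flanking windows
    x0, x1 = wn[left].mean(), wn[right].mean()
    y0, y1 = counts[left].mean(), counts[right].mean()
    slope = (y1 - y0) / (x1 - x0)
    window = (wn > center - half_window) & (wn < center + half_window)
    baseline = y0 + slope * (wn[window] - x0)
    return (counts[window] - baseline).max()

def dmab_to_nbt_ratio(wn, counts):
    """Ratio of the DMAB (1446 cm^-1) to 4-NBT (1337 cm^-1) peak intensities."""
    return peak_intensity(wn, counts, 1446.0) / peak_intensity(wn, counts, 1337.0)

Applying dmab_to_nbt_ratio to each unaveraged time-series spectrum and averaging per time stamp reproduces the quantity plotted in Figure 8C,F.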
SEM imaging of as-synthesized Mg NPs drop-cast on Si wafers and of 4-MBA-bound Mg NPs drop-cast on borosilicate coverslips was performed on an FEI Nova NanoSEM operated at 5 kV and equipped with an Everhart-Thornley detector for secondary electron imaging. The latter samples were carbon coated prior to imaging. Mg NPs on membrane filters were cut out, attached to glass coverslips, and carbon coated for SEM imaging, which was performed on an FEI Quanta 650F field emission gun SEM operated at 5 kV and equipped with an Everhart-Thornley detector for secondary electron imaging.

TEM imaging and STEM-EDS line scans of Mg−Pd NPs drop-cast on a 10 nm thick Si₃N₄ membrane were performed on an FEI Tecnai Osiris operated at 200 kV and equipped with a Gatan UltraScan1000XP (2048 by 2048 pixel) camera and an FEI Super-X quadruple EDS detector. STEM-EDS line scans were processed using the open-source software HyperSpy. 105 For the Mg Kα (1.25 keV) and Pd Lα (2.84 keV) lines, a linear background was fitted between the regions below and above the peaks, and the lines were integrated above the background to obtain the elemental distribution. The integration window was set to the extended energy resolution of Mn Kα from the detector.

HAADF-STEM, STEM-EDS, and STEM-EELS of Mg NPs with Raman reporter molecules were acquired on a Thermo Fisher Spectra 300 TEM equipped with a high-energy-resolution extreme field emission gun monochromator (X-FEG Mono), a Panther segmented STEM detector, an FEI Super-X quadruple EDS detector, and a Gatan Continuum EELS detector. The monochromator was tuned as required by the desired energy resolution. Mg NPs were cleaned three times after incubation with the Raman reporter molecules, 4-MBA or 4-NBT, before being drop-cast on a 10 nm thick Si₃N₄ membrane for STEM. STEM-EDS and STEM-EELS data were processed using the open-source software HyperSpy. 105 For STEM-EDS, the Kα lines of Mg (1.25 keV), O (0.52 keV), S (2.31 keV), and C (0.28 keV) were integrated following the procedure described above. With STEM-EELS, the zero-loss peak (ZLP) was used to align the energy axis with subpixel accuracy, and spikes were removed by linear interpolation. The map of the Mg bulk plasmon was obtained by summing the spectra between 9.0 and 11.0 eV. Other energy slices were obtained by summing the spectra over the specified energy range. For the NMF analysis, the spectra were cropped to the range of 0.3 to 8.0 eV and the minimum intensity was shifted to 0 before extracting modes. The optimum number of NMF components was determined by trial-and-error, selecting the largest value that did not cause duplicate factorization of identical components. A Lorentzian line shape was fitted to the peaks in the NMF spectral factors to determine the peak energies.

ICP-OES analysis was performed on a Thermo Fisher Scientific iCAP 7400 Duo ICP-OES analyzer. Mg NPs were digested in an aqueous matrix with dilute nitric acid, while Mg−Pd NPs were digested in an aqueous matrix with aqua regia (WARNING! Aqua regia is extremely corrosive and highly oxidizing. Handle with extreme caution and never add organics to aqua regia.). Samples were diluted to ∼1 ppm (mg L⁻¹) for analysis.
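For readers reproducing the STEM-EELS steps above in HyperSpy, a minimal sketch follows. It assumes HyperSpy ≥ 1.6 and a hypothetical file name; the energy windows (9.0−11.0 eV bulk-plasmon slice, 0.3−8.0 eV NMF range) follow the text, while the number of NMF components shown is an arbitrary example of the trial-and-error choice.

import hyperspy.api as hs

# Minimal sketch of the EELS processing pipeline (HyperSpy >= 1.6 assumed).
s = hs.load("mg_np_eels_map.hspy")        # EELS spectrum image (hypothetical file)
s.change_dtype("float")                   # decomposition requires float data

# Sub-pixel energy-axis alignment on the zero-loss peak (ZLP)
s.align_zero_loss_peak(subpixel=True)

# Map of the metallic Mg bulk plasmon: sum the 9.0-11.0 eV energy slice
mg_bulk_map = s.isig[9.0:11.0].data.sum(axis=-1)

# NMF on the 0.3-8.0 eV range, with the minimum intensity shifted to zero
s_lsp = s.isig[0.3:8.0]
s_lsp.data -= s_lsp.data.min()
s_lsp.decomposition(algorithm="NMF", output_dimension=4)   # example component count
factors = s_lsp.get_decomposition_factors()    # spectral components
loadings = s_lsp.get_decomposition_loadings()  # spatial distributions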
Single particle/aggregate optical dark-field spectroscopy was performed on Mg NPs drop-cast on borosilicate coverslips in air. The scattering spectra were obtained using an optical setup equipped with a Nikon Eclipse Ti inverted microscope, a Physik Instrumente P-545.3C7 piezoelectric stage, a halogen lamp, a dark-field condenser (numerical aperture, NA, of 0.85−0.95), a 100× oil immersion objective (variable NA set to <0.8), a Princeton Instruments IsoPlane 320 spectrometer with a 50 g/mm grating, and a PIXIS 256 detector. The exposure time was set to 1 s with 4 frames accumulated per position.

Numerical Methods. Optical scattering spectra were obtained numerically in the discrete-dipole approximation (DDA) using DDSCAT. 106 EELS calculations were performed using e-DDA, 15,82 a version of DDSCAT modified to replace the plane-wave excitation with a swift electron beam. Input NP shapes for DDA and e-DDA were obtained using the HCP Wulff construction function of Crystal Creator, a freely available crystal shape modeling tool. 41,86 The faceted spheroids were modeled using surface energy values from Lautar et al., 107 yielding the shape presented in Figure S17. The frequency-dependent refractive index (RI) of metallic Mg was taken from Palik, 108 while the ambient, MgO, IPA, and Si₃N₄ RIs were set to 1, 1.7, 1.3772, and 2.05, respectively. All calculations were carried out with MgO thicknesses as indicated, dipole spacings of 1.0 nm, and a 2 nm gap between the NPs for dimers. A 20 nm thick Si₃N₄ substrate that extended at least 40 nm beyond the edges of the NP was used, when indicated, to account for the effect of the experimental TEM support film present in the EELS measurements.

Scattering cross sections (C_sca) were calculated as Q_sca·π·α_eff², where Q_sca is the scattering efficiency taken directly from the DDSCAT output and α_eff is the radius of a sphere of equal volume. Near-field enhancements (|E|/|E₀|) were extracted from the DDSCAT output via ParaView 109 for points one dipole away from the NP surface. The SERS EF was obtained by calculating (|E|/|E₀|)⁴ at each of these points and subsequently averaging over the NP or NP dimer surface. Field-enhancement maps were plotted using ParaView such that they include the maximum field-enhancement position on the NP surface. EEL probability maps were calculated in 5 nm steps in both directions and plotted using MATLAB.

Supporting Information. Additional SEM images, UV−vis/NIR spectra, STEM-EELS spectra, additional HAADF-STEM images for all NPs, additional STEM-EELS Mg bulk plasmon maps and STEM-EDS elemental maps for Mg NPs, all full Raman and SERS spectra, additional EF calculation tabulations, SERS spectra of Au NPs, ICP-OES results, NMF decomposition of STEM-EELS, e-DDA point spectra, DDA scattering cross section plots, additional DDA electric field distribution maps, STEM-EDS line scan of Mg−Pd NPs, additional SERS evolution plots and maps of the 4-NBT to DMAB conversion, the modeled Mg NP shape, and correlated dark-field optical scattering spectroscopy, SERS, and SEM of Mg NPs (PDF)
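The surface-averaging step for the electromagnetic EF described under Numerical Methods above can likewise be sketched in a few lines, assuming the near-field magnitudes |E|/|E₀| at the sampled surface points have already been exported to a plain-text file (a hypothetical format; actual DDSCAT/ParaView exports would need corresponding parsing).

import numpy as np

# Minimal sketch of the surface-averaged electromagnetic EF: given |E|/|E0|
# at points one dipole away from the NP surface, the SERS EF is the surface
# average of (|E|/|E0|)^4. The input file name and format are illustrative.
field_ratio = np.loadtxt("surface_field_points.txt")   # one |E|/|E0| per point

ef_local = field_ratio**4        # local |E|^4/|E0|^4 enhancement
ef_avg = ef_local.mean()         # surface-averaged electromagnetic EF
ef_max = ef_local.max()          # hot-spot (gap) enhancement
print(f"average EF = {ef_avg:.0f}, maximum EF = {ef_max:.0f}")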
Figure 1. Metallic Mg faceted spheroids have a narrow size distribution and a thin oxide layer on which thiols bind. (A) SEM images and (B) size distribution histogram of as-synthesized Mg NPs. (C) HAADF-STEM image, (D) Mg bulk plasmon (∼10.1 eV) map indicating the distribution of metallic Mg, and (E) STEM-EDS elemental maps of Mg NPs after incubation with 4-MBA. The STEM-EDS maps were collected from the area marked by the white rectangle in the HAADF-STEM image.

Figure 2. SERS spectra of analytes adsorbed on dry Mg NPs: (A) 4-MBA at 532 nm, (B) 4-MBA at 633 nm, and (C) 4-NBT at 633 nm. Normal Raman spectra of the analytes in solid form at the same laser wavelength are shown under each SERS spectrum as a reference. (D) Variation of the intensity of the peak at ∼1080 cm⁻¹ (D₆ mode, labeled with a colored asterisk in A−C) across 100 randomly selected regions. Full raw spectra without background subtraction are included in Figures S7 and S8.

Figure 3. SERS spectra of analyte-incubated colloidal Mg NPs dispersed in IPA. The spectra were collected using 4-MBA at (A) 532 and (B) 633 nm and 4-NBT at (C) 532 and (D) 633 nm. The normal Raman spectra of a 0.1 M 4-MBA solution and a 0.01 M 4-NBT solution in IPA are shown as a reference under each SERS spectrum. The normal Raman spectra of IPA at (E) 532 and (F) 633 nm are presented as a reference. The spectral features of IPA are plotted in black in all plots, while the peaks from the analytes are highlighted in color. The 4-MBA concentration in SERS was quantified with ICP-OES (Table S1). The 4-NBT concentration in SERS was estimated from the incubation solution concentration and subsequent cleaning steps. Full raw spectra without background subtraction are reported in Figures S9−S11.

Figure 4. Optical properties of a single Mg faceted spheroid. (A) HAADF-STEM image, (B) STEM-EELS map of the metallic Mg bulk plasmon, (C) STEM-EELS loss probability map at the 2.5 eV dipolar LSPR obtained by integrating the loss signal from 2.45 to 2.55 eV, and (D) STEM-EELS point spectrum at the tip of the NP (binned over 3 × 3 pixels) from the red box in A.
Figure 5. Optical properties of a Mg NP dimer. (A) HAADF-STEM image, (B) STEM-EELS map of the metallic Mg bulk plasmon, (C) STEM-EELS loss probability map at the 1.6 eV bonding dipolar LSPR obtained by integrating the loss signal from 1.55 to 1.65 eV, and (D) STEM-EELS point spectrum at the tip of the NP (binned over 2 × 2 pixels) from the red box in A. (E) e-DDA loss probability map and (G) DDA electric field distribution map at 2.25 eV of a dimer consisting of two Wulff-constructed Mg NPs with a 10 nm MgO layer, placed 2 nm apart along their facets and positioned on a 20 nm Si₃N₄ layer. (F) e-DDA point spectrum at the tip of the dimer (red box in A) and (H) DDA scattering cross section (C_sca) of the dimer.

Figure 6. Mg−Pd NPs obtained by partial galvanic replacement and their SERS spectra. (A) Schematic of the synthetic approach. (B) HAADF-STEM images of Mg−Pd NPs. SERS spectra using (C) 4-MBA at 532 nm, (D) 4-MBA at 633 nm, and (E) 4-NBT at 633 nm. Normal Raman spectra of the analytes in solid form at the same laser wavelength are shown under each SERS spectrum as a reference. (F) Variation of the intensity of the peak at ∼1080 cm⁻¹ (D₆ mode, labeled with a colored asterisk in C−E) across 100 randomly selected regions. Full raw spectra without background subtraction are reported in Figures S24 and S8.

Figure 7. SERS spectra of analyte-incubated colloidal Mg−Pd NPs dispersed in IPA. (A) 4-MBA and (B) 4-NBT were used at 532 nm. The normal Raman spectra of 0.1 M 4-MBA and 0.01 M 4-NBT solutions in IPA, as well as the spectra of IPA, are shown as a reference under the SERS spectra. The spectral features of IPA are plotted in black, while the peaks from the analytes are highlighted in color. The 4-MBA concentration in SERS was quantified with ICP-OES (Table S1). The 4-NBT concentration in SERS was estimated from the incubation solution concentration and subsequent cleaning steps. Equivalent SERS data at 633 nm and full raw spectra without background subtraction are reported in Figures S25−S27 and S11.
Figure 8. SERS dynamics during the reductive coupling reaction of 4-NBT to DMAB. Spectra were acquired on dry (A−C) Mg and (D−F) Mg−Pd NPs at 532 nm using a 5 s acquisition time, with the time t set as the midpoint of each acquisition. (A, D) Evolution maps of the SERS spectra over time at (A) 42.4 and (D) 11.5 μW laser power. The spectra at t_i = 2.5 s and t_f = 152.5 s (t_f = 147.5 s for Mg−Pd in D) are shown in green and red, respectively. The full raw spectra without background subtraction at t_i and t_f are presented in Figures S28 and S31. (B, E) The change in peak intensities over time of the N−O stretching modes of 4-NBT⁻ and 4-NBT and the −N=N− stretching mode of DMAB. The data points represent the average peak intensities, and the colored backgrounds represent their standard deviation, N = 20. (C, F) Ratios of DMAB (1446 cm⁻¹ for Mg and 1447 cm⁻¹ for Mg−Pd) to 4-NBT (1337 cm⁻¹ for Mg and 1334 cm⁻¹ for Mg−Pd) calculated at t_f at varying laser powers; error bars show the standard deviation.

Table 1. EFs of Mg and Mg−Pd NPs Calculated Using N_Surf Obtained by the Monolayer Approximation

Table 2. Calculated Average Electromagnetic EFs of a Mg NP Monomer and Dimer at 532 nm with Varying Oxide Layer Thickness
TM6SF2 rs58542926 variant affects postprandial lipoprotein metabolism and glucose homeostasis in NAFLD [S]

Mechanisms underlying the opposite effects of the transmembrane 6 superfamily member 2 (TM6SF2) rs58542926 C>T polymorphism on liver injury and cardiometabolic risk in nonalcoholic fatty liver disease (NAFLD) are unclear. We assessed the impact of this polymorphism on postprandial lipoprotein metabolism, glucose homeostasis, and nutrient oxidation in NAFLD. Sixty nonobese, nondiabetic, normolipidemic, biopsy-proven NAFLD patients and 60 matched controls genotyped for the TM6SF2 C>T polymorphism underwent: indirect calorimetry; an oral fat tolerance test with measurement of plasma lipoprotein subfractions, adipokines, and the incretin glucose-dependent insulinotropic polypeptide (GIP); and an oral glucose tolerance test with minimal model analysis of glucose homeostasis. The TM6SF2 T-allele was associated with higher hepatic and adipose insulin resistance, impaired pancreatic β-cell function and incretin effect, and higher muscle insulin sensitivity and whole-body fat oxidation rate. Compared with the TM6SF2 C-allele, the T-allele entailed lower postprandial lipemia and NEFAemia, a less atherogenic lipoprotein profile, and a postprandial cholesterol (Chol) redistribution from smaller atherogenic lipoprotein subfractions to larger intestinal and hepatic VLDL1 subfractions. The postprandial plasma VLDL1-Chol response independently predicted the severity of liver histology. In conclusion, the TM6SF2 C>T polymorphism affects nutrient oxidation, glucose homeostasis, and postprandial lipoprotein, adipokine, and GIP responses to fat ingestion independently of fasting values. These differences may contribute to the dual and opposite effect of this polymorphism on liver injury and cardiometabolic risk in NAFLD.

The TM6SF2 rs58542926 C>T variant has been linked to the severity of NAFLD in genome-wide association studies (6,7): the TM6SF2 T-allele, encoding the E167K amino acid substitution, results in reduced transcript levels of its product protein, which is expressed in humans in the liver, intestine, adipose tissue, and pancreatic β-cells and has an unclear biological function (8,9). The TM6SF2 C>T variant has been linked to reduced LDL-cholesterol (LDL-C) levels and cardiovascular risk and to an increased risk of T2DM (10,11).

Mechanisms connecting the TM6SF2 C>T polymorphism to liver injury and cardiometabolic risk are unclear. The impaired hepatic VLDL secretion associated with the TM6SF2 T-allele (8,9) may not be the main mechanism mediating NASH, as enhanced lipid storage into neutral triglycerides (Tgs) protects against liver injury (12). Furthermore, the reduced CVD risk associated with the TM6SF2 T-allele is not fully explained by lower fasting cholesterol (Chol) levels (13).

Postprandial lipemia is an emerging cardiometabolic risk factor, independent of fasting lipid levels (14), and dietary fat lipotoxicity has been implicated in liver injury in NASH (3-5). Hypothesizing that dietary fat lipotoxicity may mediate the impact of TM6SF2 on liver disease and cardiometabolic risk in NAFLD, we assessed the effect of the TM6SF2 C>T variant on postprandial lipoprotein metabolism and on glucose homeostasis in biopsy-proven NAFLD patients and healthy controls.

Participants

There are no data on the impact of the TM6SF2 C>T variant on postprandial lipoprotein metabolism and glucose homeostasis.
Based on available data on the impact of the TM6SF2 C>T variant on fasting lipid levels (6-8,10) and on the impact of NAFLD on lipoprotein and glucose metabolism (12,15), considering a type I error of 0.05 and a type II error of 0.20, at least 18 T-allele carriers per arm were needed to detect a significant difference in parameters related to lipoprotein metabolism [the incremental area under the curve (IAUC) of Tg and LDL-C] and glucose homeostasis (whole-body and tissue insulin sensitivity, β-cell function) between TM6SF2 genotypes in NAFLD patients.

As obesity, dyslipidemia, and diabetes may modify the effect of the TM6SF2 C>T variant on glucose/lipid metabolism, adipokines, and liver disease, subjects with obesity (BMI ≥30 kg/m²), diabetes [fasting plasma glucose ≥126 mg/dl, or plasma glucose ≥200 mg/dl at +2 h on the oral glucose tolerance test (OGTT), or antidiabetic drugs], overt dyslipidemia (fasting serum Chol ≥200 mg/dl or plasma Tg ≥200 mg/dl), or clinical signs/symptoms of CVD were excluded. Sixty nonobese, nondiabetic, normolipidemic, biopsy-proven NAFLD patients referred to two hepato-metabolic clinics were included (criteria for the diagnosis of NAFLD are detailed in the supplemental Appendix). Each pathological feature of the liver biopsy was read by a single pathologist (Renato Parente, HUMANITAS Gradenigo) blinded to the patients' clinical-biochemical characteristics and scored according to the NASH Clinical Research Network criteria; NASH was defined according to current recommendations (1).

Sixty randomly identified healthy controls, i.e., nondiabetic, nonobese, normolipidemic individuals without evidence of CVD, randomly selected from a population-based cohort study and matched for TM6SF2 C>T genotype, age, gender, BMI, and waist circumference, were included (12). Criteria to rule out NAFLD in controls are detailed in the supplemental Appendix. Patients and controls were characterized for lifestyle habits, routine biochemistry, adipokine profile, and markers of inflammation and endothelial dysfunction, as detailed below. The homeostatic model assessment of insulin resistance (HOMA-IR) index was calculated as the product of the fasting glucose and insulin concentrations divided by 22.5 (16). Participants gave their consent to the study, which was conducted according to the Helsinki Declaration and was approved by the Institutional Review Board of San Giovanni Battista Hospital, Turin, Italy.

Genetic analyses. Genotyping for the TM6SF2 rs58542926 C/T SNP utilized the real-time allele discrimination method, using the TaqMan allelic discrimination assay (Applied Biosystems, Foster City, CA). The TaqMan genotyping reaction was run on a 7300HT fast real-time PCR system (Applied Biosystems). We also genotyped our population for the PNPLA3 SNP rs738409 C/G and for the apoE genotype, which have previously been linked to both NAFLD and lipid metabolism (17), to assess their interference with the outcome variables (detailed in the supplemental Appendix).

Dietary and physical activity record. Participants filled in the validated European Prospective Investigation into Cancer and Nutrition (EPIC) 7-day alimentary questionnaire and the Minnesota Leisure-Time Physical Activity questionnaire, and the data were analyzed as described in the supplemental Appendix.

Anthropometry. Percent body fat was estimated by the bioelectrical impedance analysis method (TBF-202; Tanita, Tokyo, Japan), which correlates closely with dual X-ray absorptiometry (18).
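As a minimal sketch of the HOMA-IR index defined above, assuming the conventional units of the cited formulation (fasting glucose in mmol/l, fasting insulin in µU/ml); the example values are arbitrary.

# Minimal sketch of HOMA-IR = (fasting glucose x fasting insulin) / 22.5,
# assuming glucose in mmol/l and insulin in uU/ml (conventional units).
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uu_ml: float) -> float:
    return (fasting_glucose_mmol_l * fasting_insulin_uu_ml) / 22.5

# Example: glucose 5.0 mmol/l and insulin 10 uU/ml give HOMA-IR ~ 2.2
print(homa_ir(5.0, 10.0))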
Abdominal visceral fat area (cm²) was estimated using the Stanforth equations, validated against computed tomography in Black and Caucasian individuals (19).

Indirect calorimetry and substrate oxidation rates. After an overnight (12 h) fast, participants underwent indirect calorimetry measurement of oxygen consumption (VO₂) and carbon dioxide production (VCO₂) using an open-circuit indirect calorimeter with a ventilated-hood system (Deltatrac™ II; Datex Instrumentarium Corp., Helsinki, Finland) (see supplemental Appendix). The whole-body respiratory quotient (RQ) and the nonproteic RQ (npRQ) were calculated as VCO₂/VO₂. Resting energy expenditure (REE) and whole-body carbohydrate (CHO) oxidation (CHOox) and fat oxidation (Fatox) rates were calculated from VO₂ and VCO₂ by using stoichiometric equations and appropriate energy equivalents (20). REE and substrate oxidation rates were corrected for fat-free mass (FFM).

OGTT-derived indexes of glucose homeostasis. Participants underwent a standard 75 g OGTT, and indexes of glucose homeostasis were calculated (detailed in the supplemental Appendix). The whole-body oral glucose insulin sensitivity index (OGIS) and the hepatic and muscle insulin resistance (IR) indexes were calculated as previously proposed and validated against the clamp in nondiabetic subjects (23,24). The adipose tissue IR index was calculated as fasting NEFAs × fasting insulin (15). The minimal model technique was used to calculate the following indexes of β-cell function: the insulinogenic index (IGI), the CP-genic index (CGI), and two integrated indexes of β-cell function, the disposition index (DI) and the adaptation index (AI), which relate β-cell insulin secretion to IR. The DI and AI were previously validated against the frequently sampled intravenous glucose tolerance test in NAFLD and nondiabetic subjects (21,25) and reliably predict T2DM development (26).

Incretin effect. To assess whether differences in β-cell function were related to a reduced incretin stimulatory effect on β-cells, a frequently sampled intravenous glucose tolerance test was performed and the incretin effect, i.e., the effectiveness of ingested glucose in stimulating β-cell insulin secretion compared with intravenous glucose, was assessed (see the supplemental Appendix).

Oral fat tolerance test. Participants underwent a 10 h oral fat tolerance test (OFTT) (14) with measurement of the following parameters (methods detailed in the supplemental Appendix):

1) Plasma total Chol, Tg, NEFA, and HDL-cholesterol (HDL-C).

2) Tg-rich lipoprotein (TRLP) subfractions and LDL. TRLPs were isolated through preparative ultracentrifugation, and their total Tg and Chol content was subsequently measured as described in the supplemental Appendix. Two VLDL subfractions with decreasing Sf values (VLDL1: Sf > 100; VLDL2: Sf = 20-100) were separated, and their Chol and Tg content was determined (see supplemental Appendix). VLDL apoB48 and apoB100 were separated by SDS-polyacrylamide gel electrophoresis using a 3.9% gel (detailed in the supplemental Appendix). LDL-C content was measured with a standardized homogeneous enzymatic colorimetric method in order to avoid Tg effects on the LDL determination (Sentinel) (see supplemental Appendix).

3) Lipid-induced oxidative stress: oxidized LDLs (oxLDLs). LDL conjugated dienes, validated markers of oxLDLs, were determined by capillary electrophoresis (detailed in the supplemental Appendix).

4) Glucose-dependent insulinotropic polypeptide (GIP), adiponectin, and resistin.
GIP is an emerging modulator of lipid metabolism independently of its incretin effect on pancreatic β-cell function. Dietary fat is the most potent stimulator of GIP secretion (27) and the TM6SF2 protein is expressed by human intestinal cells (12); furthermore, acute and chronic administration of GIP, but not of glucagon-like peptide-1, reduces Fatox and energy expenditure (28), induces adipocyte dysfunction and proinflammatory adipokine secretion (29), and promotes the development of obesity-associated metabolic disorders (30), including NAFLD, all of which were reversed by GIP antagonists (28). Plasma GIP, as well as resistin and adiponectin, which have been linked to both liver disease severity and lipoprotein metabolism in NAFLD, were measured as detailed in the supplemental Appendix.

Statistical analysis. Differences across groups were analyzed by ANOVA followed by Bonferroni correction when variables were normally distributed; otherwise, the Kruskal-Wallis test, followed by the post hoc Dunn test, was used. Normality was evaluated by the Shapiro-Wilk test. The Fisher or chi-square test was used to compare categorical variables, as appropriate. Hardy-Weinberg equilibrium was assessed using the χ² test. To adjust for multiple comparison testing, the Benjamini-Hochberg false discovery rate correction was applied to raw P values in all comparisons; significance was set at an adjusted P value threshold of 0.05 (31). The area under the curve (AUC) and the IAUC of parameters measured during the OFTT and the OGTT were computed by the trapezoid method. Due to the low prevalence of TM6SF2 TT homozygotes and to their clinical characteristics overlapping those of heterozygous CT carriers, TM6SF2 TT carriers were combined with CT heterozygotes for group comparisons. Differences were considered statistically significant at P < 0.05. The Spearman correlation test was used to assess correlations among dietary, anthropometric, and metabolic parameters and genetic polymorphisms. Based on available evidence (6-8, 10), the TM6SF2 C>T variant was modeled with a dominant model of inheritance, i.e., carriers of one or two risk alleles were compared with noncarriers. When a relation was found on univariate analysis, multivariate logistic regression was used to identify independent predictors of selected outcome variables of interest, namely: 1) for liver histology, the presence of NASH and of advanced (stage ≥3) fibrosis; 2) for CVD risk, serum CRP and the endothelial adhesion molecules E-selectin and ICAM-1; 3) for whole-body nutrient oxidation rates, CHOox and Fatox; 4) for glucose homeostasis, OGTT-derived parameters of whole-body/tissue IR and of β-cell function; and 5) for postprandial lipid metabolism, the IAUC of Tg, LDL-C, oxLDL, and the main TRLP subfractions. For this analysis, continuous variables were divided into quartiles and independent predictors of the highest quartile of the outcome variables were assessed after log transformation of skewed data. The candidate independent predictors were those variables found to be related to the outcome variables on univariate analysis. Data are expressed as mean ± SEM, unless otherwise specified (STATISTICA software, 5.1; Statsoft Italia, Padua, Italy).
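Two of the computations named in the statistical analysis, the trapezoid-method IAUC and the Benjamini-Hochberg false discovery rate correction, can be sketched in a few lines of Python/numpy. The sample curve below is illustrative only, not data from the study:

import numpy as np

def iauc(times_min, values):
    """Incremental AUC: trapezoid-rule area of the excursion above baseline (t = 0)."""
    t = np.asarray(times_min, dtype=float)
    inc = np.asarray(values, dtype=float) - values[0]   # subtract the fasting value
    return float(np.sum((inc[1:] + inc[:-1]) / 2.0 * np.diff(t)))

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg (step-up) adjusted P values."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest rank downward
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty(m)
    adjusted[order] = np.clip(scaled, 0.0, 1.0)
    return adjusted

# Illustrative postprandial Tg curve (mg/dl) sampled over a 10 h OFTT
t = [0, 120, 240, 360, 480, 600]
tg = [100, 140, 180, 160, 130, 110]
print(iauc(t, tg))                                  # IAUC in mg/dl x min
print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30]))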
Subjects' characteristics. The main features of patients and controls grouped according to the TM6SF2 C>T genotype are reported in Table 1. In study participants, the prevalence of TM6SF2 CC homozygotes was 64%, of CT heterozygotes 34%, and of TT carriers 2%. The distribution of the TM6SF2 C/T genotypes was in Hardy-Weinberg equilibrium (6-8). NAFLD patients, as a group, had higher HOMA-IR, serum CRP, and endothelial adhesion molecules (E-selectin and ICAM-1), and lower HDL-C and adiponectin, than controls. Within both NAFLD patients and controls, TM6SF2 CT/TT carriers showed lower serum CRP and endothelial adhesion molecules than TM6SF2 CC genotype carriers (Table 1). Among NAFLD patients, 42% had NASH and 16% had advanced fibrosis. TM6SF2 T-allele carriers had more severe liver histology than their CC counterparts (Table 1).

[Table 1 footnote: Data are presented as mean ± SEM, unless otherwise specified. Statistically significant P values are in bold. AVF, abdominal visceral fat area; BP, blood pressure; hs-CRP, highly sensitive CRP; WHR, waist-on-hip ratio; IGR, impaired glucose regulation; METS, metabolic equivalent of activity; Met sy, metabolic syndrome (according to the joint statement of the American Diabetes Association, the International Diabetes Federation, and the National Heart, Lung, and Blood Institute); MTP, microsomal Tg transfer protein; SREBF, sterol regulatory element-binding factor. Met sy requires the presence of three or more of the following criteria: 1) abdominal obesity, waist circumference ≥102 cm (males) or ≥88 cm (females); 2) high Tgs, ≥150 mg/dl (1.7 mmol/l) or on drug treatment for elevated Tgs; 3) low HDL-C, <40 mg/dl (1.0 mmol/l) (males) or <50 mg/dl (1.3 mmol/l) (females) or on drug treatment for reduced HDL-C; 4) hypertension, systolic BP ≥130 mm Hg and/or diastolic BP ≥85 mm Hg or on drug treatment; 5) high fasting plasma glucose (FPG), ≥100 mg/dl (5.6 mmol/l) or on drug treatment for elevated glucose.]

Alimentary record. There was no difference in daily total energy, macro- and micronutrient, type of fat, and antioxidant vitamin intake between patients with NAFLD and controls or among the different TM6SF2 genotypes (not shown).

Indirect calorimetry. While the TM6SF2 C>T variant did not affect REE, the proportion of energy derived from Fatox and CHOox differed between TM6SF2 genotypes: TM6SF2 T-allele carriers had lower RQ and npRQ, indicating that they oxidized more fat and less CHO than CC homozygotes (Table 1).

OGTT-derived indexes of glucose homeostasis. The time course of plasma glucose and serum insulin during the OGTT is reported in supplemental Fig. S1. In both patients and controls, TM6SF2 T-allele carriers showed higher hepatic and adipose IR and enhanced muscle insulin sensitivity compared with CC homozygotes. The TM6SF2 CT/TT genotype also displayed impaired pancreatic β-cell function and incretin effect compared with CC homozygotes (Table 2).

OFTT. Within both patients and controls, the TM6SF2 CT/TT genotype showed lower postprandial Tg, VLDL1-Tg, NEFA, and oxLDL responses, a higher increase in postprandial Chol content in the VLDL1 and VLDL2 subfractions of intestinal and hepatic origin, and a slight, but statistically significant, postprandial LDL-C decrease as compared with the TM6SF2 CC genotype (Table 3, Fig. 1A-D, supplemental Fig. S2). The TM6SF2 CT/TT genotype also showed lower postprandial GIP and higher resistin responses than homozygous CC carriers (Table 3, Fig. 1F, G).
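The Hardy-Weinberg check reported above is straightforward to reproduce. In the Python sketch below, the genotype counts are approximated from the reported prevalences (64% CC, 34% CT, 2% TT of n = 120), so they are illustrative rather than the authors' raw data:

def hwe_chi_square(n_cc: int, n_ct: int, n_tt: int):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium (1 df)."""
    n = n_cc + n_ct + n_tt
    p = (2 * n_cc + n_ct) / (2 * n)            # frequency of the C allele
    q = 1.0 - p                                 # frequency of the T allele
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_cc, n_ct, n_tt]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2, expected

# Approximate counts derived from the reported prevalences (illustrative only)
chi2, expected = hwe_chi_square(77, 41, 2)
print(chi2, expected)  # chi2 ~ 1.8, below the 3.84 critical value (1 df, P = 0.05)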
DISCUSSION

The main findings of our study are the following: 1) The TM6SF2 C>T variant modulated postprandial lipid metabolism: despite similar fasting lipid levels, TM6SF2 CT/TT carriers showed lower postprandial Tg, NEFA, and oxLDL responses, higher HDL-C levels, and a Chol redistribution from LDL to larger intestinal and hepatic TRLP subfractions. TM6SF2 T-allele carriers also had higher incretin GIP and resistin elevations after fat ingestion. 2) The postprandial plasma VLDL1-Chol elevation independently predicted the severity of liver histology in NAFLD, while the Tg and oxLDL responses were independently associated with markers of CVD risk. 3) The TM6SF2 C>T variant affected tissue IR, pancreatic β-cell function, and the whole-body substrate oxidation rate, the latter possibly through modulation of the GIP response to dietary fat.

Postprandial lipemia is an independent cardiometabolic risk factor in the Western world and, consistently, individuals spend most of the day in the postprandial rather than the fasting state (14). The effect of the TM6SF2 variant on dietary fat metabolism may contribute to the dual and opposite effect of this SNP on liver disease severity and on CVD risk in NAFLD (32): following fat ingestion, TM6SF2 T-allele carriers showed a shift in Chol content from LDL to larger intestinal and hepatic VLDL subfractions, which are preferentially taken up by liver cells and adipocytes through the LDL receptor-related protein (33, 34) and the VLDL receptor (35), thereby triggering hepatocyte apoptosis and adipocyte dysfunction (33-35). The independent association of the postprandial VLDL-Chol response with liver histology is consistent with recent data demonstrating an important role for TRLP uptake in promoting high fat-induced liver injury (36) and linking the Chol concentration in VLDL subclasses to hepatic Chol content, inflammation, and fibrosis (37). These findings suggest that the TM6SF2 T-allele-associated postprandial lipoprotein pattern may divert toxic Chol away from the vessel walls into the liver and adipose tissue, enhancing liver injury and adipose dysfunction while protecting from CVD. The independent association of CVD risk markers with the postprandial Tg and oxLDL responses, which were lower in TM6SF2 T-allele carriers, is also consistent with an important role for postprandial lipoprotein metabolism in mediating the cardioprotective effect of the T-allele observed in large epidemiological studies (7, 10).

[Table 2: OGTT-derived indexes of glucose homeostasis in patients with biopsy-proven NAFLD and controls, grouped according to TM6SF2 rs58542926 C/T genotype (n = 120).]

The lower postprandial Tg response in TM6SF2 T-allele carriers may be due to lower fat absorption or to greater chylomicron clearance. The lower increase in NEFA is not consistent with greater chylomicron clearance, which would have increased plasma NEFA through spillover. Additionally, a recent report showed that the TM6SF2 T-allele impairs Tg processing and secretion in enterocytes (38), confirming that reduced Tg absorption may underlie the lower postprandial lipemia observed in TM6SF2 T-allele carriers. If confirmed by larger studies, these findings may have therapeutic implications, as Chol-lowering interventions may reduce Chol hepatotoxicity in TM6SF2 T-allele carriers, irrespective of fasting Chol levels. We also evaluated the impact of the TM6SF2 SNP on glucose homeostasis, as both NAFLD and the TM6SF2 C>T variant have been associated with an increased risk of T2DM (2, 11). The TM6SF2 gene variant affected tissue insulin sensitivity and pancreatic β-cell function: the TM6SF2 T-allele was associated with an impaired incretin effect and β-cell function, possibly via reduced incretin secretion or action on β-cells, which express the TM6SF2 protein (13).
These findings may help to select NAFLD carriers of the TM6SF2 at-risk genotype, who are also at higher risk of T2DM, for targeted preventive interventions improving β-cell dysfunction, including incretin mimetics. An intriguing finding was the impact of the TM6SF2 SNP on muscle insulin sensitivity and whole-body Fatox rates, both effects related to the postprandial adiponectin and GIP responses to fat (Table 4). Consistent with our data, adiponectin stimulates muscle Fatox and insulin sensitivity, while GIP potently reduces energy expenditure and Fatox (39). The link between TM6SF2 and incretins and the role of GIP antagonism in enhancing Fatox and insulin sensitivity warrant future investigation. In the meantime, it should be noted that the GIP increase induced by dipeptidyl peptidase-IV inhibitors, currently being evaluated in NAFLD, may attenuate the benefits of glucagon-like peptide-1 elevation (40). In conclusion, a maladaptive response to a chronic, daily, repetitive metabolic challenge like fat ingestion may link the TM6SF2 C>T variant to liver injury and cardiometabolic disease in NAFLD. Future research should unravel the underlying molecular pathways in different tissues and organs, allowing therapeutic interventions tailored to the individual risk profile and mechanism of injury (41-43). The strength of our study is the careful selection and thorough characterization of participants. The limitations are the small number of subjects and the cross-sectional design, which prevents any causal inference between the TM6SF2 variant and the abnormalities in lipid and glucose metabolism and requires confirmation by larger follow-up studies. A further caveat is that we did not directly measure hepatic and muscle insulin sensitivity, but rather estimated them from the time course of glucose and insulin during the OGTT. This method assumes a similar intestinal glucose absorption rate across TM6SF2 genotypes, as a faster glucose absorption rate in TM6SF2 T-allele carriers would cause a steeper increase and an earlier peak and fall in plasma glucose regardless of any actual differences in tissue insulin sensitivity. However, visual inspection of the plasma glucose curve during the OGTT (supplemental Fig. S1) shows a similar slope in the 0-30 min ascending limb of the curve across the TM6SF2 genotypes and the same peak time (+60 min), making differences in glucose absorption very unlikely.
2018-04-03T00:16:07.957Z
2017-02-27T00:00:00.000
{ "year": 2017, "sha1": "39c764f1f34f7b64ed58a85649f242731cda90c9", "oa_license": "CCBY", "oa_url": "http://www.jlr.org/content/58/6/1221.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "f79ceb384feed2516948ae4bb1835ccb7a6dcbdc", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236644318
pes2o/s2orc
v3-fos-license
Studying the effect of potato products in extending the period of potato storage

In this research, the cultivation and storage of potatoes grown in saline soils of Khorezm region at different salinity levels with biopreparations, such as Bist (Pseudomonas putida) and Zamin-M (Bacillus subtilis, Bacillus megaterium and Pseudomonas stutzeri), were studied. In the experimental variants treated with Zamin-M, there was 1.381 mg/kg of vitamin C, followed by 0.112 mg/kg of vitamin B6, 0.089 mg/kg of vitamin PP, 0.481 mg/kg of vitamin B12 and 0.092 mg/kg of vitamin B2. Potatoes treated with Zamin-M contained 0.45% fructose, 0.87% glucose, 0.92% sucrose and 0.04% maltose, which was higher than with the other biopreparations. During storage, the share of diseased tubers was 21.8% when treated with Earth ointment, 20% when treated with Bist, and 19.8% when treated with Zamin-M. The results showed that Zamin-M was suitable for the soil and climate conditions of Khorezm region, especially for the storage of potatoes, and it was found that Zamin-M contains microorganisms that activate the synthesis of phytohormones operating under stress. In comparison with the control variants, the tested Zamin-M extended the shelf life of potatoes by 25%.

Introduction

More than 290 million tons of potatoes (Solanum tuberosum) are produced worldwide every year. According to FAO 2004-2005 statistics, China, Russia, India, Ukraine and the United States ranked first to fifth in potato cultivation, respectively. Potato cultivation ranks fourth among agricultural crops after wheat, corn and rice [1, 3, 4]. Potatoes are an important crop that promises food to millions of people, especially in developing countries. Some tuber diseases, such as dry rot, appear mainly in storage, while others, such as soft rot, can affect the potato at every stage. The main fungal and bacterial diseases affecting the potato crop are considered here by their identification, symptoms on the potato plant or tuber, the nature of the pathogen, epidemiology, control measures, and other factors [4]. Potatoes can be affected by many diseases at both the pre- and post-harvest stages of the crop. The main fungal diseases affecting the crop are late blight, early blight, black scurf, fusarium dry rot, wart, powdery mildew and charcoal rot. Such diseases are common in many countries and can lead to a significant decrease in potato production [5, 8, 9]. Furthermore, the salt resistance of plants is a large and multifaceted problem. Salts increase the osmotic pressure of the soil solution, thereby making it difficult for plants to take up water (physiological drought) and nutrients. Many salts alter plant metabolism and cause the accumulation of toxic byproducts (salt poisoning), which impairs the accumulation of chlorophyll, leads to the loss of parts of the leaf, and leaves the plant unable to accumulate organic matter [2, 5]. Other changes are also observed in the plant: a decrease in tissue moisture, loss of growing points, increasing necrosis, and others. As the temperature rises, the toxicity of the salts increases. The suppression of plants in saline soils depends not only on the amount of salt in the soil, but also on the composition of the salts. Salinization with chloride salts creates worse conditions for plant growth than sulfate salinity: less organic matter accumulates and the plants remain much smaller [11, 12].
The agro-industrial complex is a leading sector of the Uzbek economy, and potato growing is one of its developing branches. However, potato growing faces several barriers: dependence on imported varieties that are poorly matched to the country's agro-climatic and soil conditions, high losses of seed material, loss of varieties of economic value, an outdated material and technical base, slow renewal of production and storage technologies, and a high environmental burden on the agroecosystem [6, 7, 11]. Local producers are abandoning outdated and ineffective technologies and introducing new, tested scientific methods. Potatoes were brought to Uzbekistan in the second half of the 19th century. After the adoption of the Resolution of the Cabinet of Ministers of the Republic of Uzbekistan No. 301 of August 30, 1996 "On measures to deepen market relations in potato growing and increase potato production in the country", the gross potato yield reached 692,000 tons and potato production reached 30 kg per capita in 1997, and the import of potatoes was stopped. The presence of a complex of nutrients such as starch, protein, non-protein nitrogen compounds, soluble carbohydrates, minerals and lipids (fatty substances) in the potato pulp ensures its high nutritional value. Potato tubers also contain small amounts of pectin, vitamins, alkaloids and other compounds that affect their nutritional and taste quality [10, 13, 17]. The sugar content of potatoes is mainly in the form of glucose, while sucrose and fructose are present in very small amounts. A high sugar content in the potato tuber has a negative effect on its taste: when the tuber is processed, the sugar dissolves quickly and reacts with substances formed by the breakdown of protein, giving the tuber a dark brown color and reducing its quality [3, 5-9]. The amount of sugar in the tuber depends on the potato variety and its degree of maturity. Unripe young tubers contain more sugar than ripe ones, and when tubers are stored in a slightly cold place, the sugar content increases. Moreover, tubers were found to contain a wide range of mineral elements. The ash contains (mg per 100 g of raw weight): potassium, 568; phosphorus, 58; calcium, 15; chlorine, 50; and magnesium, 45. Tubers also contain small amounts of iron, copper, manganese, sodium, cobalt, sulfur, silicon and other similar mineral elements [2, 8, 9]. Leaves of cherries, currants, sweet peppers and greens are rich in vitamin C; however, consuming 300 grams of potatoes may provide the body with two thirds of the daily requirement of vitamin C, 17.5% of vitamin B1 and 5% of vitamin B2. During the winter storage of tubers, the vitamin content is reduced 2-3-fold, and peeled potatoes lose 25% of their vitamin C, whereas unpeeled ones may lose 20% of their vitamin content. It is advisable to apply mineral fertilizers in the right proportions to reduce darkening of the potato flesh [5, 6, 9, 15-17]. Depending on the mechanical composition of the soil, darkening of the potato flesh increases when potatoes are grown in heavy soils with high rates of nitrogen and chlorine-containing fertilizers [9, 11]. However, the country still faces some issues in the production of high-quality, nutritious potatoes and in the prolongation of the potato storage period.
Therefore, this research was aimed at determining ways of increasing productivity and marketability, and at creating environmentally safe and cost-effective potato production by improving storage technology and reducing pathogen contamination during storage in Khorezm region.

Results and discussion

The results of the experiment showed that potatoes contained a range of vitamins (per 100 g of raw mass): PP, 0.04-2.0; B1, 0.05-0.12; B2, 0.01; B6, 0.08-0.22; P, 3; pantothenic acid, 0.2-0.3; carotene (provitamin A), 0.05; and small amounts of vitamins K, I, D and E. The amount of vitamin C was 10-30 mg, in some cases up to 50 mg; evidently, immature young tubers were the richest in vitamin C. Table 1 shows that when the preparation Zamin-M was used in variant 1, vitamin C was 1.381 mg/kg, vitamin B6 0.112 mg/kg, vitamin PP 0.089 mg/kg, vitamin B12 0.481 mg/kg and vitamin B2 0.092 mg/kg, all higher than with the other biopreparations. The lowest vitamin values were found in the potatoes of variant 6, where a simple biopreparation was used: vitamin C was 0.922 mg/kg, followed by B6 with 0.067 mg/kg, PP with 0.039 mg/kg and B12 with 0.215 mg/kg. The results also showed that the highest dry-matter content, 25.11%, was obtained with Earth ointment, whereas the highest moisture content, 79.19%, was observed when Zamin-M was used. The lowest dry-matter and moisture values were found in variant 5, at 21.21% and 75.47%, respectively (Table 2). Table 3 shows that the fructose content in potatoes grown with Zamin-M was 0.45%, glucose 0.87%, sucrose 0.92% and maltose 0.04%; this variant had higher values than Bist, Earth ointment and the other new preparations tested for their impact on potato crop productivity during storage in 2019-2020. [Table 3: The effect of biological preparations on the carbohydrate content of potatoes.] During the storage period, the incidence of diseased tubers was 21.8% when treated with Earth ointment, 20% when treated with Bist, and 19.8% when treated with Zamin-M. Consequently, different concentrations of Zamin-M were tested for prolonging the shelf life of potatoes (Table 4). When the effect of different concentrations of the Zamin-M biopreparation on the storage period of potato tubers was studied, pre-storage treatment at dilutions of 1:100, 1:500 and 1:1,000 gave a productivity of 96.7%, 97.3% and 100%, respectively. The lowest productivity, 93.7%, was observed in the control variants, owing to 6.03% infected tubers (Table 5).

Conclusions

The soil and climatic conditions of Khorezm region proved suitable for storing the potato crop treated with the Zamin-M preparation, which is explained by the presence in Zamin-M of microorganisms that activate the synthesis of phytohormones operating under stress. The Zamin-M preparation extended the shelf life of potatoes by 25% compared with the control variants. It was found that as soil salinity increased, the dry matter and starch content of the tubers decreased significantly, while the amount of ash and chlorine increased, especially in the leaves and stems. The decrease in tuber starch is attributed to chloride ions, which reduce starch synthesis and the flow of the smallest starch grains from the leaf to the tuber. The number of plants was also reduced when tubers grown under saline soil conditions were replanted in fresh or saline soils.
The yield of potato varieties grown in saline soils depends not only on their resistance to salt, but mainly on their earliness and their resistance to degeneration. The harvesting phase of potatoes planted in early spring coincides with the period when the air temperature rises and salt accumulates in the soil; therefore, it is advisable to plant the earliest varieties during this period. The growth and development of potatoes planted in the summer coincide with a period of falling temperatures, so if high-yielding and disease-resistant late varieties are planted, their tuber formation phase coincides with the cool days of autumn.

[Table 5 column headings: Experimental variants; amount of infected tubers, %; productivity, %.]
2021-08-03T00:06:33.057Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "905b2faeb48ffe4346552625ba7edfa0d16829e5", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/34/e3sconf_uesf2021_04021.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a8a174bf0df82a415b4c3e951e2044570d274ca9", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
18116421
pes2o/s2orc
v3-fos-license
New MR sequences in daily practice: susceptibility weighted imaging. A pictorial essay

Background. Susceptibility-weighted imaging (SWI) is a relatively new magnetic resonance (MR) technique that exploits the magnetic susceptibility differences of various tissues, such as blood, iron and calcification, as a new source of contrast enhancement. This pictorial review is aimed at illustrating and discussing its main clinical applications.

Methods. SWI is based on high-resolution, three-dimensional (3D), fully velocity-compensated gradient-echo sequences using both magnitude and phase images. A phase mask obtained from the MR phase images is multiplied with the magnitude images in order to increase the visualisation of the smaller veins and of other sources of susceptibility effects, which are displayed at best after post-processing of the 3D dataset with the minimal intensity projection (minIP) algorithm.

Results. SWI is very useful in detecting cerebral microbleeds in ageing and occult low-flow vascular malformations, in characterising brain tumours and degenerative diseases of the brain, and in recognising calcifications in various pathological conditions. The phase images are especially useful in differentiating between the paramagnetic susceptibility effects of blood and the diamagnetic effects of calcium. SWI can also be used to evaluate changes in iron content in different neurodegenerative disorders.

Conclusion. SWI is useful in differentiating and characterising diverse brain disorders.

Introduction

Susceptibility weighted imaging (SWI) is a relatively new magnetic resonance (MR) technique that provides innovative sources of contrast enhancement, visualising the changes in magnetic susceptibility caused by different substances such as iron, haemorrhage or calcium. The basic concept of the technique is to retain phase information in the final image, discarding phase artefacts and keeping just the local phase of interest. Sensitivity to susceptibility effects increases progressing from fast spin-echo (SE) to conventional SE to gradient-echo (GE) sequences, from T2-weighting to T2*-weighting, from short to long echo times, and from lower to higher field strengths. Before the clinical implementation of SWI, susceptibility imaging relied only on GE sequences. SWI differs significantly from a T2*-weighted GE sequence: it is based on a long echo time (TE), high-resolution, flow-compensated, three-dimensional (3D) GE imaging technique with filtered phase information in each voxel. The combination of magnitude and phase data produces an enhanced-contrast magnitude image that is particularly sensitive to haemorrhage, calcium, iron storage and slow venous blood, thus allowing a significant improvement compared with T2* GE sequences. After image acquisition, incidental phase variations due to static magnetic field heterogeneities are removed. The phase mask is then multiplied with the magnitude data to enhance the visualisation of vessels or foci with susceptibility effects [1]. SWI is therefore especially helpful in the detection of calcifications and microhaemorrhages, which are both characterised by low signal. The evaluation of the corrected phase images allows differentiation between the two substances, as calcifications appear bright because of a positive phase shift and haemorrhages appear dark because of a negative phase shift. A supplementary source of information in SWI is primarily associated with the magnetic susceptibility differences between oxygenated and deoxygenated haemoglobin.
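The mask-multiplication step just described can be sketched compactly. The following numpy example is a minimal, hedged illustration of standard SWI processing (homodyne high-pass filtering of the phase, a negative-phase mask, and repeated multiplication into the magnitude); the Gaussian filter width, the mask's sign convention and the exponent m = 4 are common choices from the SWI literature, not parameters taken from this paper:

import numpy as np
from scipy.ndimage import gaussian_filter

def swi_process(complex_slice: np.ndarray, sigma: float = 8.0, m: int = 4) -> np.ndarray:
    """Minimal SWI processing sketch for one 2D slice of complex GE data."""
    magnitude = np.abs(complex_slice)

    # High-pass filter the phase by dividing out a low-pass-filtered version of
    # the complex image (homodyne filtering); this removes the slowly varying
    # background phase caused by static field heterogeneities.
    low_pass = gaussian_filter(complex_slice.real, sigma) \
        + 1j * gaussian_filter(complex_slice.imag, sigma)
    hp_phase = np.angle(complex_slice / (low_pass + 1e-12))

    # Negative-phase mask: phase in [-pi, 0) is scaled linearly into [0, 1);
    # non-negative phase is left at 1 (the sign convention varies by vendor).
    mask = np.where(hp_phase < 0.0, (np.pi + hp_phase) / np.pi, 1.0)

    # Multiply the mask into the magnitude m times to enhance the
    # susceptibility contrast of veins, microbleeds and calcifications.
    return magnitude * mask ** m

# Random complex data standing in for one reconstructed slice
slice_ = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
swi = swi_process(slice_)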
SWI represents a technical improvement on "high-resolution blood oxygenation level-dependent venography" (HRBV), originally developed by Reichenbach et al. [2], which was based on 3D, long-TE, flow-independent GE sequences and manipulation of the images with the phase data. The paramagnetic properties of deoxyhaemoglobin [the BOLD (blood oxygen level dependent) effect] and the prolonged T2* of venous blood were used as an intrinsic contrast agent, leading to a phase difference between vessels containing deoxygenated blood and the surrounding brain tissue, resulting in signal intensity cancellation. Thus, deoxyhaemoglobin can behave like a contrast agent at long TEs, differentiating arteries from small veins, which can be as small as 100-200 μm and are therefore difficult to detect with conventional MR angiography techniques such as time of flight (TOF) or phase contrast (PC) [3]. For this reason, the phase-added information that is usually not available in the conventional magnitude image makes SWI well suited for the visualisation of very small vessels, such as the caput medusae of venous angiomas and telangiectasias, as a result of a combination of slow flow with changes in deoxyhaemoglobin concentration [4]. Latest advances have allowed the technique to be refined, thereby expanding its clinical applicability to brain imaging as a complementary source of information to conventional T1-weighted and T2-weighted imaging sequences. This pictorial essay is aimed at showing the most relevant clinical applications of SWI.

Fig. 1 Normal subject. a SWI magnitude image. b SWI, minimal intensity projection (minIP; 10 mm) image: cortical and subependymal veins are well displayed. c SWI, phase map: the distinction between the pars reticulata (arrow) and compacta (asterisk) of the substantia nigra is enhanced compared with conventional fast SE and GE sequences. The outer margins of the red nucleus are also better displayed (see d for comparison). d T2-weighted axial image.

Fig. 2 Recent onset of altered mental status and cognitive impairment in a 67-year-old man. a Fluid attenuated inversion recovery (FLAIR) MRI: diffuse hyperintensity of the white matter, more pronounced in the right parietal region. b Diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) map: the white-matter hyperintensity represents vasogenic oedema. c GE T2*-weighted image: small punctate hypointense foci in the right parietal cortex (open arrow). d SWI minIP: increased visualisation of markedly hypointense foci surrounding the white matter abnormalities, which correspond to multiple cerebral microbleeds. On the basis of the SWI findings, CAA with diffuse inflammatory changes was suspected and steroid therapy was administered. e Follow-up MRI at 3 months showed significant reduction of the vasogenic oedema.

Technical aspects

At our institution, MR imaging (MRI) is performed using a 1.5-T system (Magnetom Avanto; Siemens, Erlangen, Germany) with a 12-channel head coil. SWI is obtained with a long-TE, fully flow-compensated 3D GE sequence with the following parameters: repetition time (TR)/TE, 49/40 ms; flip angle, 15°; rectangular field of view (FOV), 7/8; matrix size, 280 × 320; slice thickness, 1.6 mm (80 slices in a single slab); iPAT factor, 2; acquisition time, 5 min. Images are acquired in the axial plane parallel to the bicommissural line. The SWI data are reconstructed with the minimum intensity projection (minIP) algorithm and multiplanar reformation (MPR) techniques to obtain images with a thickness (3-10 mm) and position comparable to those of conventional sequences. The minIP algorithm enhances the visualisation of veins while attenuating the signal from the brain tissue (Fig. 1).
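The minIP reconstruction just described is simply a sliding-slab minimum along the slice direction, as in the following numpy sketch; at the 1.6 mm slice thickness quoted above, a 10 mm slab corresponds to roughly six slices (the rounding and the slab size are the only assumptions made here):

import numpy as np
from scipy.ndimage import minimum_filter1d

def min_ip(volume: np.ndarray, slab_mm: float = 10.0, slice_mm: float = 1.6) -> np.ndarray:
    """Sliding-slab minimum intensity projection along the slice axis (axis 0).

    minIP keeps the darkest voxel within each slab, which is what makes
    hypointense veins and microbleeds stand out against brain tissue.
    """
    n_slices = max(1, round(slab_mm / slice_mm))   # ~6 slices for a 10 mm slab
    return minimum_filter1d(volume, size=n_slices, axis=0)

# volume shaped (slices, rows, cols); random data as a stand-in for SWI magnitude
vol = np.random.rand(80, 280, 320).astype(np.float32)
minip = min_ip(vol)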
Fig. 3 Patient with long-standing hypertension. a T2-weighted image shows a thin hypointense band in the right thalamus, which represents haemosiderin deposition (arrow). b GE T2*-weighted image shows multiple microbleeds, the largest one in the right thalamus (arrow). c SWI minIP: the number of identifiable microbleeds is increased compared with GE images. They can be recognised bilaterally in the basal ganglia and in the subcortical temporal white matter (arrows).

Fig. 4 Alzheimer's disease. a Coronal reformatted T1-weighted MP-RAGE section depicts marked atrophy of the hippocampi. b Coronal reformatted SWI section shows a cerebral microbleed in the right frontal white matter (arrow).

SWI sequences have some intrinsic disadvantages, mainly represented by artefacts caused by undesirable sources of magnetic susceptibility occurring at air-tissue interfaces, which limit the investigation of areas next to the paranasal sinuses and temporal bone. The "blooming artefact", useful in most cases, may also be unwanted in some situations, producing signal cancellation in normal tissue and loss of anatomical borders. The sequence acquisition time on a 1.5-T system ranges from 5 to 8 min, depending on the spatial resolution and the brain coverage needed, leading to an increased incidence of movement artefacts. Imaging at high field strength has some advantages over 1.5 T in the delineation of even smaller vessels of the venous network, with shorter imaging times because of the higher signal-to-noise ratio, higher spatial resolution and increased susceptibility effects [5]. However, susceptibility-based signal loss and severe image distortion caused by air-tissue interfaces or other sources of local field heterogeneity are much more severe at higher field strengths. SWI has been investigated at 3 T with excellent results, especially concerning brain tumours and the detection of cerebral microbleeds [6, 7].

Clinical applications

Cerebral amyloid angiopathy

Sporadic cerebral amyloid angiopathy (CAA) is a common small-vessel disease associated with ageing, dementia and Alzheimer's disease, which can only be diagnosed histopathologically following biopsy or at post-mortem examination. It consists of deposition of amyloid protein within the small and medium-sized cerebral arteries, which is likely responsible for increased vessel fragility, and it is one of the major causes of lobar intraparenchymal haemorrhages in the elderly [8]. Computed tomography (CT) and conventional MR techniques are usually unable to show cerebral microbleeds (CMBs), which can frequently be observed on T2*-weighted gradient-echo MRI and have a typical lobar distribution [9]. Recent findings indicate that CMBs in the general elderly population are relatively common and are even more frequently observed in patients with AD [10]. SWI is much more sensitive than GE sequences in demonstrating CMBs in and around the arteriolar vessel wall, which appear as black dots best displayed on minIP images (Fig. 2). Histologically, they represent perivascular clusters of haemosiderin-laden macrophages resulting from leakage from cerebral small vessels [11].
Higher field strengths, as expected, can improve CMB detectability, increasing the lesions' contrast [7]. CAA is also characterised by white matter hyperintensities on conventional MRI sequences, which have been associated with cognitive impairment [12]. Vascular amyloid deposition is believed to be involved in the pathophysiological mechanisms that determine white matter hypoperfusion, through either vessel stenosis or vascular dysfunction [13]. CAA should also be suspected in elderly patients with clinical signs of a progressive encephalopathy syndrome with seizures, in whom extensive white matter abnormalities are discovered on brain MRI together with multiple CMBs with a lobar and subcortical distribution (Fig. 2). It has recently been reported that this combination of MR findings should be interpreted as CAA-related inflammation, which can be treated with steroid therapy with prompt resolution of the symptoms [14]. In those cases, the demonstration of an APoE ɛ4ɛ4 genotype can definitely support the neuroradiological diagnosis. Hypertensive encephalopathy and especially posterior reversible encephalopathy syndrome (PRES), in which the signal abnormalities of the white matter are more symmetrical and located in the parieto-occipital regions, should be taken into consideration for the differential diagnosis.

Hypertensive encephalopathy

Hypertensive encephalopathy is characterised by multiple CMBs, which are usually silent and can be discovered when the patient is investigated with MRI in order to understand the cause of an intraparenchymal haemorrhage located outside the basal ganglia. SWI is more sensitive to CMBs than T2*-weighted GE sequences in blood pressure (BP)-related small-vessel disease. CMBs are usually discovered both in the deep basal ganglia and in the subcortical white matter (Fig. 3). Histologically, they represent focal accumulations of haemosiderin-containing macrophages in the perivascular space of small brain vessels, indicating previous extravasation of blood, and are often associated with the presence of a symptomatic haemorrhage in the corresponding area. The number of CMBs, which remain detectable for years, is significantly associated with BP levels. Moreover, the presence of deep CMBs can be a useful marker of BP-related small-vessel disease, helping in the differential diagnosis with CAA [15].

[Figure caption fragment, most likely from Fig. 5: c DWI, b=1,000 image demonstrates a right occipital acute ischaemic lesion. d SWI minIP shows a right thalamic haemorrhagic lesion in the vascular territory of the right deep internal cerebral vein (arrow).]

Neurodegenerative diseases

Iron deposition increases in the brain as a function of age, primarily in the form of ferritin and particularly in oligodendrocytes, but also in neurons and microglia. Typical sites of iron deposition include the globus pallidus, substantia nigra, and red and dentate nuclei. Ferritin is paramagnetic and produces strong susceptibility effects on T2*-weighted images. SWI filtered-phase images are particularly suitable for showing increased iron content in the brain [16]. Abnormally elevated iron levels are evident in many neurodegenerative disorders, including Parkinson's disease, Alzheimer's disease, Huntington's disease and amyotrophic lateral sclerosis. The ability to measure the amount of ferritin in the brain can be used for a better understanding of disease progression and is also helpful in predicting the treatment outcome.
Phase images allow a better distinction between the pars compacta and the pars reticulata of the substantia nigra, which contains iron (Fig. 1). As there is evidence of increased iron in the substantia nigra of patients with idiopathic Parkinson's disease [17], SWI sequences can be proposed as a useful imaging tool to identify iron deposition as a biomarker of disease progression, although longitudinal studies are required to support the usefulness of this specific application [18]. Accurate localisation of the subthalamic nucleus can also be achieved on SWI phase maps at 3 T, allowing safe direct targeting for the placement of electrodes in the treatment of Parkinson's disease [19]. In 70-98% of patients with Alzheimer's disease, intravascular amyloid deposition is found at autopsy [20], and CMBs are commonly observed. Recent evidence has shown that patients with multiple CMBs have more white matter hyperintensities and perform worse on the mini mental state examination compared with patients with no CMBs, despite a similar disease duration [21]. A significant correlation has also recently been reported between patients with at least one hypointensity on GRE-T2* imaging and those homozygous for the apolipoprotein E ɛ4ɛ4 gene, a well-known risk factor for Alzheimer's disease (Fig. 4) [22].

Stroke

SWI has been demonstrated to be very useful in the acute phase of stroke for several reasons. First of all, it is very sensitive to the presence of CMBs, whose early identification is believed to predict the probability of potential haemorrhagic transformation after thrombolytic treatment [23]. It has also been hypothesised that CMBs can represent a link between cerebral haemorrhage and ischaemia [24]. SWI is also capable of identifying the acute intravascular clot in the main and distal branches of the cerebral arteries [25].

Fig. 11 Incidental discovery of a pontine telangiectasia in a 34-year-old woman investigated for migraine. a T2-weighted image barely shows the abnormality. SWI minIP axial (b) and sagittal reformatted section (c) show a markedly hypointense lesion with regular margins, located in the pons. d Post-contrast sagittal T1-weighted image. The capillary telangiectasia is characterised by mild enhancement with an arbor-like pattern.

Fig. 12 Ataxia telangiectasia. A 31-year-old woman with a diagnosis of ataxia telangiectasia by the age of 7, investigated for dizziness and headache. a Sagittal T2-weighted image demonstrates severe atrophy of the cerebellar vermis. b GE T2*-weighted image reveals multiple hypointense foci located in both hemispheres. c SWI minIP: additional hypointense black spots can be identified, corresponding to haemosiderin deposits presumably due to gliovascular nodules with perivascular haemorrhage.

The evaluation of the intravascular content of deoxyhaemoglobin in the hypoperfused brain is one of the most interesting applications of SWI. This technique has been proved useful for the assessment of tissue viability in patients with hyperacute cerebral ischaemia, as it provides functional information about the ischaemic penumbra. In acute arterial stroke, flow reduction or absence causes an increase in oxygen extraction, leading to a growth in the amount of deoxyhaemoglobin in the hypoperfused brain tissue. Deoxyhaemoglobin represents a supplementary source of contrast enhancement in SWI because of its paramagnetic properties, which determine a reduction in T2* as well as a phase difference between the vessel and its surrounding parenchyma.
It appears markedly hypointense on both the magnitude and the phase images. As the T1 and T2 properties of blood depend on its oxygen saturation, there are differences between arterial and venous vessels that cannot be discovered with conventional sequences. The visualisation of draining veins within areas of impaired perfusion allows the identification of penumbral brain tissue in a different way from the current perfusion-weighted imaging (PWI) techniques (Fig. 5) [26].

Cerebral venous sinus thrombosis

SWI has become a functional method for evaluating cerebral venous sinus thrombosis (CVST) by demonstrating engorgement of the venous system as a result of venous hypertension and collateral slow flow. Dural sinus thrombosis causes an increase in deoxyhaemoglobin concentration in the veins involved, which appears as a prominent area of hypointense signal on SWI; if CVST is treated successfully, this effect disappears [26]. SWI is also capable of identifying a thrombosed pial vein, which is responsible for the development of oedema in the white matter, owing to its high sensitivity in the demonstration of an intravascular clot. At the same time, SWI can depict parenchymal or extra-axial haemorrhages that can occur in the case of infarction, with higher sensitivity compared with conventional GE sequences (Fig. 6).

Cerebral vascular malformations

Cerebral arteriovenous malformations (AVMs) are easily displayed by conventional MRI and MR angiography because of their characteristic high flow, whereas venous malformations such as cavernomas, developmental venous anomalies and capillary telangiectasias cannot be adequately visualised without contrast medium administration and GE sequences, as they mainly consist of slow-flow small vessels. MR angiography techniques are often inadequate for visualising small vessels with slow flow. By contrast, SWI sequences are well suited for the visualisation of very small vessels, such as the caput medusae of venous angiomas and telangiectasias, as a result of a combination of slow flow with changes in deoxyhaemoglobin concentration [5]. The combined information derived from both phase and magnitude images is responsible for the enhanced visualisation of such lesions.

Fig. 13 Diffuse axonal injury in a 14-year-old boy who had severe TBI following a motorcycle crash. The brain MR study was performed three days after the trauma, when the patient was in a severely comatose state (GCS 5). a FLAIR image identifies multiple hyperintense lesions in the splenium of the corpus callosum and in the frontal subcortical and periventricular white matter. b GE T2*-weighted image: haemorrhagic shearing injuries are barely visible in the right fronto-opercular and parieto-occipital regions (arrows). c SWI minIP: additional microhaemorrhages are recognisable at the grey matter-white matter junction of the frontal lobes and in the right parieto-occipital white matter (black arrows). d SWI minIP, reformatted sagittal section shows microhaemorrhages in the corpus callosum (white arrows).

SWI plays a substantial role in the identification and characterisation of cerebral vascular malformations, first of all by improving their detection rate. Moreover, by allowing simultaneous visualisation of the different compartments of cerebral AVMs and of their relationship with one another and with the brain parenchyma, it is particularly valuable for therapeutic planning.
The clinical usefulness of a similar technique, BOLD MR venography, was first demonstrated by Essig et al. [27]. Phase changes caused by the susceptibility differences between oxygenated and deoxygenated red blood cells and the different relaxation rates of venous and arterial blood provide a natural separation of arteries and veins. The 3D dataset of SWI can therefore be used as an alternative to MR angiography in order to better differentiate afferent arteries from draining veins. This advantage becomes relevant with small AVMs with recent bleeding, in which it is mandatory to identify the nidus in relation to eloquent areas in order to plan the best therapeutic approach, endovascular or surgical or both (Fig. 7). Veins appear dark due to T2* loss and processing of the phase image, while arteries show bright signal from time-of-flight (TOF) inflow enhancement. The use of higher flip angles and shorter TRs makes it possible to increase the contrast enhancement in the arteries without overly degrading the venography. For the evaluation of high-flow vascular malformations (i.e. AVMs and dural arteriovenous fistulas), the use of both minIP and MIP post-processing techniques can improve the image quality. The identification of small niduses, the exact location of the fistulous point and the depiction of the venous drainage patterns are the most relevant pieces of information that should be obtained with this technique. The elevated diagnostic accuracy of SWI compared with digital subtraction angiography (DSA) has recently been reported for the detection of arteriovenous shunting (AVS) in brain AVMs [28].

Fig. 14 Characterisation and grading of a glial tumour. A 30-year-old man with dizziness, long-standing behavioural changes and a subsequent diagnosis of an intra-axial cerebral tumour. a T2-weighted image shows a large frontal infiltrating glioma. b SWI, unprocessed image: absence of intratumoural susceptibility signals (due either to calcifications, haemorrhages or venous vasculature). c Contrast-enhanced SWI: detailed visualisation of the margins of a large anaplastic area (asterisk), which is in good correlation with the enhancement characteristics (d), hypervolaemia on the PWI relative cerebral blood volume (rCBV) map (e) and MR spectroscopy (f). d Post-contrast T1-weighted image. e PWI, rCBV map: area of increased cerebral blood volume (open arrow). f Single-voxel MR spectroscopy, showing inversion of the Cho/NAA ratio and the presence of a lactate peak. Targeted stereotactic biopsy in the supposed necrotic area confirmed the hypothesis of gliomatosis cerebri WHO grade III with several necrotic foci.

Dural AV fistulas (DAVFs) are usually more difficult to identify than AVMs on conventional MRI, especially when venous ectasia is absent, as in type I and II DAVFs, when patients have vague symptoms (i.e. headache or tinnitus) and there are no localising signs [29]. SWI is capable of better identifying the extent of the abnormal venous drainage compared with MR angiography. The conspicuity of the venous structures on SWI can be explained by the combination of a prolonged cerebral circulation time, which favours increased oxygen extraction resulting in an increased concentration of deoxyhaemoglobin, with the typical venous engorgement (Fig. 8) [30]. Concerning slow-flow vascular malformations (i.e.
cavernomas, developmental venous anomalies and capillary telangiectasias), the phase information contained in SWI is responsible for an artefactual enhancement that dramatically improves MRI sensitivity to these pathological conditions. Without SWI, small vascular malformations can be entirely missed by conventional imaging techniques [31]. Individuals with cavernous malformations can present with epilepsy, focal neurological deficits or acute intracranial haemorrhage, although these vascular malformations are often discovered as incidental findings. Cavernomas that have previously bled are usually detectable on routine MRI because of the prominent signal intensity of haemorrhagic products. Cavernomas may also subsequently develop dystrophic calcifications that can be detected on CT. However, these lesions tend to be barely visible when they have not bled, except for faint enhancement after contrast medium administration, which is often nonspecific. Patients with multiple lesions can be diagnosed with familial cavernous angiomatosis and should be referred for genetic evaluation and counselling (Fig. 9). Individuals with symptomatic, growing or haemorrhagic malformations should be considered for surgical resection. Close follow-up after diagnosis and treatment is helpful to identify lesion progression or recurrence [32]. Developmental venous anomalies (DVAs) consist of a radially arranged venous complex converging on a centrally located venous trunk, which drains normal brain parenchyma. They are better displayed with SWI than with T1-weighted contrast-enhanced images (Fig. 10). Capillary telangiectasias are asymptomatic venous vascular malformations, which are smaller and less common than cavernomas and can sometimes occur in mixed cavernoma/telangiectasia lesions. They may occur sporadically or may infrequently be associated with syndromes such as hereditary haemorrhagic telangiectasia or ataxia telangiectasia. They may also manifest as a result of endothelial injury, such as radiation-induced vascular injury, particularly in children who have received cranial irradiation. They are characterised by poor contrast enhancement and can therefore be missed on conventional T1-weighted and T2-weighted sequences (Fig. 11). Ataxia telangiectasia is a rare autosomal recessive disorder characterised by early-onset progressive cerebellar ataxia, immunodeficiency, and ocular and cutaneous telangiectasia, and is associated with an elevated risk of intracranial tumours. SWI can reveal brain gliovascular nodules with perivascular haemorrhage and small haemosiderin deposits, presumably only in older patients during the later stages of the disease; however, in our experience this technique can also be a useful screening tool in young patients (Fig. 12) [33].

Fig. 15 Internal architecture of a high-grade tumour. A 2-year-old child with headache, vomiting and right hemiparesis. a CT demonstrates a large heterogeneous left frontal tumour with calcified foci. b T2-weighted image: the mass is irregularly hyperintense, with cystic changes and small hypointensities representing small vessels and calcifications. c Post-contrast T1-weighted image shows patchy enhancement of the lateral part of the lesion and some vascular structures. d CE-SWI minIP: calcifications appear as punctate hypointensities, unchanged after gadolinium injection (white arrows).
Post-processing with the minIP algorithm allows the simultaneous visualisation of arteries (hyperintense, black arrows) and veins (linear hypointensities, open arrow) around and inside the tumour. CE-SWI is superior to T1-weighted post-gadolinium sequences in demonstrating a diffuse BBB rupture in the lateral necrotic area (bright signal, asterisk). Histopathology revealed a primitive neuroectodermal tumour (PNET) with extensive cystic and necrotic changes and increased vascularity.

Traumatic brain injuries

MRI with GE sequences has been used extensively in the past for investigating young patients with traumatic brain injuries (TBI), either in the acute phase, when the clinical picture of a severe coma is not explained by CT, or months after the trauma, in order to understand the causes of an unsuccessful recovery. In both cases, diffuse axonal injuries (DAI) can be responsible for the clinical picture. SWI has been widely used in the identification of smaller haemorrhages due to a shear-strain mechanism of injury, thus refining the prediction of outcome [34]. Haemorrhagic DAI lesions can be detected six times better on SWI than on conventional T2*-weighted GE sequences, and the recognisable volume of haemorrhage is approximately twofold greater (Fig. 13). Reformatted SWI images in the sagittal plane are particularly helpful in the assessment of corpus callosum haemorrhages. Both the number and the volume of haemorrhagic lesions correlate with neuropsychological deficits [35].

Intracranial tumours

Brain MRI predictors of tumour grade include contrast enhancement, oedema, mass effect, cyst formation or necrosis, haemorrhage, metabolic activity and cerebral blood volume. It is well known that the growth of solid tumours, such as gliomas, depends on the angiogenesis of pathological vessels. SWI can provide a thorough assessment of the internal angioarchitecture of brain tumours (increased microvascularity inside and beyond the tumour margins), together with the identification of foci of haemorrhage and calcification, thus representing an additional tool in the neuroradiological grading of cerebral neoplasms. The administration of a contrast agent (CE-SWI) allows discrimination among these three entities, as only blood vessels will change their signal intensity, while calcifications and regions of inactive haemorrhage (which can be differentiated from each other by the evaluation of phase images, as described above) will not. The clinical potential of contrast-enhanced BOLD MR venography at 3 T and 1.5 T for the study of brain tumours was first reported by Barth et al. [36], who demonstrated variable venous patterns in various types of tumours and in different parts of the lesions (oedema, contrast-enhancing areas, necrosis), which might represent increased blood supply and particular vascular patterns around fast-growing malignant tumours. Kim et al. [37] recently assessed the added value provided by SWI in the differential diagnosis of solitary enhancing brain lesions compared with the use of conventional MRI alone. CE-SWI has been found to be equivalent or even superior to CE-T1 images in the evaluation of most tumours with necrotic areas (Figs. 14 and 15): the particular contrast combination within SWI images permits the simultaneous visualisation of information otherwise obtained by a multimodal imaging approach including CT, CE-T1 SE, FLAIR and conventional T2* GE sequences [38]. Pinker et al.
[6] demonstrated a correlation between intratumoural susceptibility effects, positron emission tomography (PET) results and histopathological grading. SWI has been proposed for the evaluation of the clinical response to anti-angiogenic drugs and for the differential diagnosis of pseudo-progression after chemo- and radiotherapy [39]. A correlation with MR PWI has also been attempted [26]. However, larger comparative studies of PWI and SWI are still needed to determine a more precise role for these new techniques in the grading of cerebral neoplasms.

Conclusion

SWI, which is a combination of GE techniques with phase information, represents a useful tool for the identification and characterisation of vascular malformations and for a better understanding of cerebrovascular diseases. Despite some inherent limitations, SWI has increasing indications in neuroradiology and should be included in the routine imaging protocols for trauma and vascular abnormalities. Further investigation is still needed into its extensive clinical application in neurodegenerative diseases and tumoural pathological conditions.
An Equivariant Tamagawa Number Formula for Drinfeld Modules and Applications

We fix data $(K/F, E)$ consisting of a Galois extension $K/F$ of characteristic $p$ global fields with arbitrary abelian Galois group $G$ and a Drinfeld module $E$ defined over a certain Dedekind subring of $F$. For this data, we define a $G$-equivariant $L$-function $\Theta_{K/F}^E$ and prove an equivariant Tamagawa number formula for certain Euler-completed versions of its special value $\Theta_{K/F}^E(0)$. This generalizes Taelman's class number formula for the value $\zeta_F^E(0)$ of the Goss zeta function $\zeta_F^E$ associated to the pair $(F, E)$. Taelman's result is obtained from our result by setting $K=F$. As a consequence, we prove a perfect Drinfeld module analogue of the classical (number field) refined Brumer--Stark conjecture, relating a certain $G$-Fitting ideal of Taelman's class group $H(E/K)$ to the special value $\Theta_{K/F}^E(0)$ in question.

Introduction

In [15] Taelman proved a beautiful class number formula for the special value at $s = 0$ of the $\mathbb{C}_\infty$-valued Goss zeta function $\zeta_F^E(s)$ associated to a characteristic $p$ global field $F$ and a Drinfeld module $E$ defined over a certain Dedekind domain $O_F \subseteq F$. Since Taelman's formula establishes an equality between the special value $\zeta_F^E(0)$ and a quotient of (what we interpret below as) volumes of two compact topological groups canonically associated to the pair $(F, E)$, it can be naturally viewed as a "Tamagawa number formula" for the pair.

In this paper we consider a pair $(K/F, E)$, where $K/F$ is a Galois extension of characteristic $p$ global fields of abelian Galois group $G$ and $E$ is a Drinfeld module defined over the ring $O_F$. To the pair $(K/F, E)$ we associate a $G$-equivariant version $\Theta_{K/F}^E(s)$ of the Goss zeta function.

For $v \in \mathrm{MSpec}(O_F)$ we fix a decomposition group $\widetilde{G}_v \subseteq G_F$, an inertia group $\widetilde{I}_v \subseteq \widetilde{G}_v$ and a Frobenius morphism $\widetilde{\sigma}_v \in \widetilde{G}_v$ for $v$. We let $G_v$, $I_v$ and $\sigma_v$ denote their projections via the Galois restriction map $G_F \twoheadrightarrow G$. These are the decomposition group, inertia group, and a Frobenius morphism, respectively, associated to $v$ in $K/F$.

Next, we consider a Drinfeld module $E$ of rank $r \in \mathbb{Z}_{\geq 0}$, defined on $A = \mathbb{F}_q[t]$ with values in $O_F\{\tau\}$. We remind the reader that $E$ is given by an $\mathbb{F}_q$-algebra morphism determined by
$$\varphi_E(t) = t \cdot \tau^0 + a_1 \cdot \tau^1 + \cdots + a_r \cdot \tau^r,$$
where $a_i \in O_F$ and $a_r \neq 0$. The Drinfeld module $E$ gives a natural functor $M \to E(M)$ from $O_F\{\tau\}$-modules to $A$-modules: $E(M)$ has the same underlying group as $M$, endowed with the $t$-action
$$t * m = \varphi_E(t)(m) = t \cdot m + a_1 \cdot \tau^1(m) + \cdots + a_r \cdot \tau^r(m).$$

Examples. Natural examples of the correspondence $M \to E(M)$ as above are $E(O_K)$, $E(O_K/v)$ and $E(K_\infty)$, where $v \in \mathrm{MSpec}(O_F)$ is any maximal ideal of $O_F$ and $K_\infty$ is the direct sum of the completions of $K$ with respect to all its valuations extending $\infty$. Note that $\mathbb{F}_q((t^{-1}))$ is the completion of $\mathbb{F}_q(t)$ with respect to $\infty$.

1.2. The associated Galois representations $H^1_{v_0}(E, G)$.

For arithmetic data $(K/F, E)$ as above, any $v_0 \in \mathrm{MSpec}(A)$ and $n \in \mathbb{Z}_{\geq 0}$, we let $E[v_0^n]$ and $T_{v_0}(E) := \varprojlim_n E[v_0^n]$ be the usual $A_{v_0}$-modules of $v_0^n$-torsion points and the $v_0$-adic Tate module of $E$, endowed with the obvious $A_{v_0}$-linear, continuous $G_F$-actions. Here, $A_{v_0}$ denotes the $v_0$-adic completion of $A$ at $v_0$. Since the rank of $E$ is $r$, we have $A_{v_0}$-linear topological isomorphisms $T_{v_0}(E) \simeq A_{v_0}^r$. If the Drinfeld module $E$ has good reduction at $v$ (i.e. the coefficient $a_r$ of $\varphi_E(t)$ is a $v$-adic unit), then the $G_F$-representation $T_{v_0}(E)$ is unramified at $v$ and the polynomial $P_v(X)$ (the characteristic polynomial of $\widetilde{\sigma}_v$ acting on $T_{v_0}(E)$) is independent of $v_0$ and has coefficients in $A$. (See [7].)
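As a concrete illustration of the functor $M \to E(M)$, consider the rank-one Carlitz module, which the paper uses repeatedly later on; the display below is assembled from the definitions above and the fact (stated in the Appendix) that $\tau$ is the $q$-power Frobenius, and is meant only as orientation.

```latex
% The Carlitz module C is the rank-one Drinfeld module with
%   \varphi_C(t) = t\cdot\tau^0 + \tau^1 .
% On any O_F\{\tau\}-module M, e.g. M = O_K/v, the A-module C(M)
% has the same underlying group as M, with t acting by
\[
  t * m \;=\; \varphi_C(t)(m) \;=\; t\cdot m + m^q ,
\]
% since \tau acts as the q-power Frobenius, \tau(m) = m^q.
```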
Following Goss [8, §8.6], we let $H^1_{v_0}(E) := T_{v_0}(E)^* := \mathrm{Hom}_{A_{v_0}}(T_{v_0}(E), A_{v_0})$, endowed with the dual $G_F$-action. In analogy with abelian varieties, one should think of $H^1_{v_0}(E)$ as the first étale cohomology group of $E$ with coefficients in $A_{v_0}$.

Definition 1.2.1. We define the $G$-equivariant first étale cohomology groups of $E$ by $H^1_{v_0}(E, G) := H^1_{v_0}(E) \otimes_{A_{v_0}} A_{v_0}[G]$, for $v_0 \in \mathrm{MSpec}(A)$, endowed with the diagonal $G_F$-action, where $G_F$ acts on $H^1_{v_0}(E)$ as described above and on $A_{v_0}[G]$ via the projection $G_F \twoheadrightarrow G$ given by Galois restriction.

Note that we have an isomorphism of $A_{v_0}[G]$-modules $H^1_{v_0}(E, G) \simeq A_{v_0}[G]^r$, and the family $\{H^1_{v_0}(E, G)\}_{v_0}$ satisfies the properties listed in the following proposition.

Proposition 1.2.2. Let $v \in \mathrm{MSpec}(O_F)$ be such that $E$ has good reduction at $v$. Let $v_0 \in \mathrm{MSpec}(A)$ be such that $v \nmid v_0$. Then the following hold. (1) $H^1_{v_0}(E, G)$ is ramified at $v$ if and only if $v$ is ramified in $K/F$. (2) Assume that $v$ is tamely ramified in $K/F$. Then $H^1_{v_0}(E, G)^{I_v}$ is a finitely generated projective $A_{v_0}[G]$-module and we have an equality …, where $e_v := \frac{1}{|I_v|}\sum_{\sigma \in I_v}\sigma$ is the idempotent of the trivial character of $I_v$ in $A[G]$. Part (3) follows from (2) and a result of Gekeler (see [7, Thm. 5.1]) saying that $P_v(0)$ and $Nv$ generate the same ideal in $A$.

The following describes a class of modules $M$ as above which will be very relevant for us.

Proposition 1.2.5. For data $(K/F, E)$ as above, let $v \in \mathrm{MSpec}(O_F)$ be a prime which is tamely ramified in $K/F$. Then the following hold. (1) … (2) If $E$ has good reduction at $v$, then …

Proof. (Sketch) Part (1) is Proposition 7.5.1(1) of the Appendix. We will not prove the equality in part (2) for all Drinfeld modules $E$ here, as the proof is technical and practically irrelevant for the rest of the paper. However, we give a short proof in the case where $E := C$ is the (rank 1) Carlitz module given by $\varphi_C(t) = t + \tau$, which has good reduction at all primes of $\mathrm{MSpec}(O_F)$. In this case, it is not difficult to see that $P_v(X) = X - Nv$. According to Proposition 1.2.2(2) above, we have …. Now, Proposition 7.5.1(3) in the Appendix shows that $|C(O_K/v)|_G = (Nv - \sigma_v e_v)$ and $|O_K/v|_G = Nv$, which concludes the proof in this case. Now, for any $E$ we have …, as the monic polynomials $|E(O_K/v)|_G$ and $|O_K/v|_G$ have the same degree $[O_F/v : \mathbb{F}_q]$.

1.3. The associated L-functions and their special values.

To the data $(K/F, E)$ we associate a class of $G$-equivariant $L$-functions, generalizing the Goss zeta function for $(F, E)$ (see [5, §3] for a detailed account of the relation between the Goss zeta function and non-equivariant $L$-values). In what follows, $\mathbb{F}_q((t^{-1}))$ is viewed as the completion of $\mathbb{F}_q(t)$ in the valuation at $\infty$, and $\mathbb{C}_\infty$ denotes the completion of an algebraic closure of $\mathbb{F}_q((t^{-1}))$. For $s \in \mathbb{C}_\infty^\times \times \mathbb{Z}_p$ (Goss's space) and $f \in \mathbb{F}_q[t]$ monic, we let $f^s \in \mathbb{C}_\infty$ denote Goss's exponential (see [8, §8.2]). Under Goss's natural embedding $\mathbb{Z} \subseteq \mathbb{C}_\infty^\times \times \mathbb{Z}_p$ (see loc. cit.), $f^n$ has the usual meaning for all $n \in \mathbb{Z}$ and $f$ as above. In particular, $f^0 = 1$.

Definition 1.3.1. Let $(K/F, E)$ be data as above. Its $G$-equivariant $L$-function is given by …, where the product is taken over all $v \in \mathrm{MSpec}(O_F)$ which are tamely ramified in $K/F$ and such that $E$ has good reduction at $v$. Here $(\mathbb{C}_\infty^\times \times \mathbb{Z}_p)^+$ is a certain "half plane" of Goss's space, which contains $\mathbb{Z}_{\geq 0}$. The infinite product above converges on $(\mathbb{C}_\infty^\times \times \mathbb{Z}_p)^+$. We will not address these convergence aspects here, as we will be interested only in (a modified version of) the special value $\Theta^E_{K/F}(0)$. According to Proposition 1.2.5(2) above, this special value can be expressed as a product of the local factors computed there.
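For orientation, the Carlitz-module computation carried out in the proof sketch above can be packaged as follows. This display is my reconstruction assembled from the quantities the text actually computes ($P_v(X) = X - Nv$, $|C(O_K/v)|_G = Nv - \sigma_v e_v$, $|O_K/v|_G = Nv$); the identification of the quotient as the local factor of the special value is an assumption, by analogy with Taelman's non-equivariant value.

```latex
% Local data at a tame prime v of good reduction, for E = C (Carlitz):
\[
  P_v(X) = X - Nv, \qquad
  |C(O_K/v)|_G = Nv - \sigma_v e_v, \qquad
  |O_K/v|_G = Nv ,
\]
% so the (reconstructed) local factor contributed by v is the quotient
\[
  \frac{|O_K/v|_G}{|C(O_K/v)|_G} \;=\; \frac{Nv}{\,Nv - \sigma_v e_v\,}
  \;\in\; \mathbb{F}_q((t^{-1}))[G] .
\]
```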
1.4. The associated compact $A[G]$-modules and their volumes.

To the arithmetic data $(K/F, E)$, we associate a class of compact $A[G]$-modules on which we define a multiplicative measure (volume) with values in $\mathbb{F}_q((t^{-1}))[G]^+$, the subgroup of monic elements in $\mathbb{F}_q((t^{-1}))[G]^\times$, to be defined in §7.3. Recall that $A := \mathbb{F}_q[t]$. As before, $K_\infty := K \otimes_{\mathbb{F}_q(t)} \mathbb{F}_q((t^{-1}))$ is the product of the completions of $K$ at all primes above $\infty$, endowed with the usual (product) topology. It is a locally compact $\mathbb{F}_q$-algebra, endowed with a natural topological $\mathbb{F}_q((t^{-1}))[G]$-module structure. The additive Hilbert theorem 90 shows that one has an isomorphism of (topological) $\mathbb{F}_q((t^{-1}))[G]$-modules ….

(1) An $A$-lattice in $K_\infty$ is a free $A$-submodule of $K_\infty$ of rank equal to $\dim_{\mathbb{F}_q((t^{-1}))} K_\infty$, which spans $K_\infty$ as an $\mathbb{F}_q((t^{-1}))$-vector space. … The uniqueness of $\exp_E$ implies that the above is in fact a morphism of $A[G]$-modules. Also, since the preimage under $\exp_E$ of an $A$-lattice is an $A$-lattice ([15, Prop. 3]), it is easy to see that the preimage $\exp_E^{-1}(M)$ is also an $A[G]$-lattice in $K_\infty$, for all taming modules $M$. Consequently, if $M$ is either $O_K$ or a taming module and $\exp_E: \ldots$ Here $H(E/M)$ is defined to be the $A[G]$-module cokernel of the exponential map, i.e.
$$H(E/M) := E(K_\infty)/\big(\exp_E(K_\infty) + E(M)\big). \qquad (1.4.6)$$

We consider the class $\mathcal{C}$ of compact $A[G]$-modules fitting into a structural short exact sequence $0 \to K_\infty/\Lambda \to M \xrightarrow{\pi} H \to 0$, where $\Lambda$ is an $A[G]$-lattice in $K_\infty$, $K_\infty/\Lambda$ is endowed with the usual (quotient) topology, and $H$ is a finite $A[G]$-module.

Remark 1.4.10. Note that $E(K_\infty)/E(O_K)$ belongs to the class $\mathcal{C}$ if and only if $K/F$ is tame. Also, note that if $\Lambda$ is a projective $A[G]$-lattice in $K_\infty$, then $K_\infty/\Lambda$ belongs to $\mathcal{C}$.

In §4.1 below, we define a lattice index for any two projective $A[G]$-lattices $\Lambda_1$ and $\Lambda_2$ in $K_\infty$. If $G$ is trivial, this recovers Taelman's lattice index defined in [15]. In §4.2 below, we fix an arbitrary free $A[G]$-lattice $\Lambda_0$ in $K_\infty$ and use the lattice index to define a volume function $\mathrm{Vol} : \mathcal{C} \to \mathbb{F}_q((t^{-1}))[G]^+$. … If $G$ is the trivial group (i.e. $K = F$), the above Corollary is precisely Taelman's class number formula [15, Thm. 1]. For a general $G$ of order coprime to $p$, the above Corollary implies the main result of Angles-Taelman [3]. See Remark 6.1.3 for more details.

The main application of Theorem 1.5.1 above included in this paper is the Drinfeld module analogue of the classical refined Brumer-Stark Conjecture for number fields. We remind the reader that this conjecture roughly states that the special value $\Theta_{K/F,T}(0)$ of a $G$-equivariant, Euler-modified, Artin $L$-function $\Theta_{K/F,T} : \mathbb{C} \to \mathbb{C}$, associated to an abelian extension $K/F$ of number fields of Galois group $G$, belongs to the Fitting ideal $\mathrm{Fitt}^0_{\mathbb{Z}[G]}(\mathrm{Cl}^\vee_{K,T})$ of the Pontrjagin dual of a certain ray-class group $\mathrm{Cl}_{K,T}$ of the top field $K$. (See [10, §6.1] for a precise statement and conditional proof.) This conjecture has tremendously far-reaching applications to the arithmetic of number fields. (See [4] for details.) The Drinfeld module analogue of this conjecture is the following. (See §6.2 for the proof.)

Theorem 1.5.5 (refined Brumer-Stark for Drinfeld modules). If $M$ is a taming module for $K/F$ and $E$ is a Drinfeld module of structural morphism $\varphi_E: \ldots$, then … In the case $p \nmid |G|$, the lattice $\exp_E^{-1}(\ldots)$ ….

1.6. A brief word on proof strategy and techniques.

Once we construct and study the various invariants associated to the data $(K/F, E)$, briefly described in Sections 1.2-1.4 of this introduction, the proofs of the main results stated above rely on $G$-equivariant versions of Taelman's techniques ([15]), which we develop in this paper. In particular, we prove a $G$-equivariant version of Taelman's trace formula (see §3), which plays a crucial role in obtaining Theorem 1.5.1.
The main obstacle in passing from a non-equivariant to a $G$-equivariant setting is, as expected, the lack of cohomological triviality (or lack of finite projective dimension) of the various $A[G]$-modules at play. Of course, this obstacle would not be present had we assumed that $p \nmid |G|$, as in [3] and [6], for example.

2. Nuclear operators, the G-equivariant theory

2.1. Generalities. Let $R := \mathbb{F}_q[G]$ and let $V$ be a topological $R$-module which is $R$-projective or, equivalently, $G$-c.t. (See Corollary 7.1.7(1) for the equivalence.) In this section, we develop the theory of nuclear operators and determinants à la Taelman (see [15, §2]) for $V$ as an $R$-module, as opposed to an $\mathbb{F}_q$-vector space. The main difference between the $R$-linear and $\mathbb{F}_q$-linear settings is that in the $R$-linear setting one can only take determinants of endomorphisms of finitely generated, projective $R$-modules (as opposed to arbitrary finite dimensional $\mathbb{F}_q$-vector spaces), in the sense of (7.0.1) in the Appendix. In what follows, "endomorphism of $V$" means a continuous $R$-module endomorphism of $V$. We work with a basis $\mathcal{U} = (U_i)_{i \geq M}$ of open neighborhoods of $0$ consisting of open $R$-submodules of $V$. Assuming that $\mathcal{U}$ exists, we fix it and define everything that follows for the pair $(V, \mathcal{U})$. Independence on $\mathcal{U}$ in the definitions and results below will be addressed in §2.2.

Definition 2.1.2. Let $\varphi$ be an endomorphism of $V$. We say that $\varphi$ is locally contracting if there exists an $I \in \mathbb{Z}_{\geq M}$ such that $\varphi(U_i) \subseteq U_{i+1}$, for all $i \geq I$. A neighborhood $U := U_I$ of $0$ with this property is called a nucleus for $\varphi$.

Remark 2.1.3. If $V$ is a finitely generated $R$-module, then we always take $U_i = \{0\}$, for all $i \geq 1$. Obviously, every endomorphism of $V$ is locally contracting in this case. The following are clear. …

For the rest of this section, we assume that $V$ is compact, but not necessarily finitely generated over $R$. Now, we describe how to take determinants of certain types of endomorphisms. …

Proof. Say $I \leq J$, so $U \subseteq W$. Then we have the descending sequence …, such that $\varphi_n(U_i) \subseteq U_{i+1}$ for all $n$ and $i$, with $0 \leq n < N$ and $I \leq i \leq J-1$. Then $(1 + \Phi)$ induces the identity map on the quotients $U_i/U_{i+1}$, so we have …. (The reader has to check that the projective limit above makes sense.)

Proof. This follows from Proposition 2.1.5 and the multiplicativity of finite determinants.

… Let $\Phi$ be a nuclear endomorphism such that $\varphi_n(V') \subseteq V'$, for all $n$. … Proof. Clear from the behaviour of finite determinants in short exact sequences.

… Suppose $\varphi$ is locally contracting and $\Phi$ is nuclear with respect to both $\mathcal{U}$ and $\mathcal{U}'$. Then …, where the nuclear determinants $\det$ and $\det'$ are computed with respect to $\mathcal{U}$ and $\mathcal{U}'$, respectively.

Proof. Let $N \in \mathbb{Z}_{\geq 1}$. It is easy to see that we can take $i$ and $j$ sufficiently large, such that …, and note that $\varphi_n$ gives the $0$-map when restricted to $U_i/U'_j$, for all $n < N$. Consequently, the exact sequence above gives an equality of (regular) determinants …. This yields the desired equality of nuclear determinants by taking a limit as $N \to \infty$.

For a prime $v$ in $F$, we let $K_v := \prod_{w|v} K_w$ be the product of the $w$-adic completions of $K$, for all primes $w$ in $K$ sitting above $v$, endowed with the product of the $w$-adic topologies. As usual, $F_v$, $O_v$, and $m_v$ denote the $v$-adic completion of $F$, its ring of integers, and the maximal ideal of that ring, respectively. We denote by $S_\infty$ the set of infinite primes in $F$. … Recall that Corollary 7.2.5 shows that if $v \in S_\infty$ and $v \notin S_\infty$, respectively, then …. For all $i \geq 0$, we let …. According to Corollary 7.2.5, … serves as the appropriate basis of open neighborhoods of $0$ in $V$.
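Because several of the displayed formulas in this section did not survive extraction, the following schematic (mine, not a quotation of the paper) records the general pattern by which such nuclear determinants are built from finite levels; the precise indexing and normalization in the paper may differ.

```latex
% Schematic: for a nuclear operator \Phi = \sum_{n\ge 1} \varphi_n Z^n on V,
% with all coefficients \varphi_n locally contracting and a common nucleus
% U_i, each quotient V/U_i is a finitely generated projective R-module,
% and one sets, truncating modulo Z^N and passing to the limit,
\[
  \det\nolimits_{R[[Z]]}\!\bigl(1 + \Phi \mid V\bigr)
  \;:=\;
  \varprojlim_{N}\,
  \det\nolimits_{R[Z]/(Z^N)}\!\bigl(1 + \Phi \;\bigm|\; V/U_i\bigr),
\]
% the point being that (1 + \Phi) acts as the identity on U_i/U_{i+1},
% so the finite-level determinants stabilize for i large and the result
% is independent of the chosen nucleus (cf. Proposition 2.1.13).
```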
Now, we can define nuclear endomorphisms and take nuclear determinants for the pair $(V, \mathcal{U})$.

2.3.2. $V$'s arising from general taming modules for $K/F$. Let $M$ be a taming module for $K/F$ as in Definition 1.3.2, and let $S$ be a finite set of primes of $F$ containing $S_\infty$. Let …. In this case, we let …, as a consequence of the normal basis theorem and Shapiro's Lemma. We give a basis of $G$-c.t., open $R$-submodules of $K_S$, which induces a basis of open $R$-submodules of $V$, as described below. For all $i \geq 0$, we let …. As above, Corollary 7.2.5 shows that ….

Proof. Clearly, $\varphi$ is an endomorphism of ….

Corollary 2.3.4. For any $M$ and $S$ as in the lemma, any $\varphi \in O_{F,S}\{\tau\}\tau$ is a locally contracting endomorphism of $V$. Proof. Combine the Lemma above with Proposition 2.1.5.

Proposition 2.3.5. Let $M$ be a taming module for $K/F$, and let $S$ be a finite set of primes of $F$ containing $S_\infty$. Let $\alpha, \beta \in O_{F,S}$ and let $\varphi = \beta\tau^n$ for $n \geq 1$. Then for any $m \in \mathbb{Z}_{\geq 1}$, ….

Proof. We may assume that $\alpha, \beta \neq 0$. Define $\varphi_\alpha: \ldots$. Then $U_{a,S}$ is a nucleus for $\varphi$, $\varphi_\alpha\varphi$, and $\varphi\varphi_\alpha$, and …. We have a commutative diagram of finite $R$-module morphisms, with vertical maps given by $\varphi_\alpha$, $\varphi\varphi_\alpha$, $\varphi_\alpha$ and $\varphi_\alpha\varphi$, whose horizontal arrows are isomorphisms (as $\alpha$ is invertible in $K_S$). For $a$ as above, …. However, since $U_{a,S}$ is a nucleus for $\varphi_\alpha\varphi$, by definition we have …. Now, from the definition of $b$ it is easy to see that …. Consider the following short exact sequence of finite, projective $R$-modules: …. Consequently, if we combine the fact that $U_{a+b,S}$ is a nucleus for $\varphi\varphi_\alpha$ (because $U_{a,S}$ is, and $b > 0$) with the short exact sequence above and with (2.3.6) and (2.3.7), we obtain ….

The following Lemma addresses independence on the chosen taming pair $(W, W_\infty)$. …

3. The G-equivariant trace formula and consequences

In this section we prove a trace formula for $\mathbb{F}_q[G]$-linear nuclear operators on $K_S/M_S$ by using the line of reasoning in [15, §3], adapted to our $G$-equivariant setting. As a consequence, we interpret the special values $\Theta^{E,M}_{K/F}(0)$ of the $G$-equivariant $L$-functions defined in the introduction as determinants of such nuclear operators. The notations are as above.

Proof. As in the proof of Lemma 1 in [15], we have a sequence of compact, $G$-c.t. $R$-modules …. Therefore, they all commute with $\psi$ and $\eta$ and are local contractions with respect to $U_v$, $U'$ and $U$. (See Corollary 2.3.4.) Consequently, we may apply Proposition 2.1.13 to obtain the following: …, where the nuclear determinants above are computed with respect to $U'$, $U_v$ and $U$, respectively. For $\Phi$ as above and $v \in S$, we may take $m_v M_v$ as a common nucleus for all the coefficients $\varphi_n$ of $\Phi$, viewed as a nuclear operator on $M_v$. The last two displayed equalities give the desired result.

We show that we have an equality …. Then, the desired result follows by taking a projective limit, as $N \to \infty$. The set $S_{D,N}$ is a group under multiplication, and $(1 + \Phi) \bmod Z^N \in S_{D,N}$. Now, following Taelman, we use a trick of Anderson ([2, Prop. 9]). Since $O_{F,T}$ has no residue fields of degree …, for every $r \in O_{F,T}$, and every $n < N$ and $d < D$, we have …. Using this congruence it follows that the group $S_{N,D}$ is generated by the set …. By properties of finite determinants, we have …. Consequently, Proposition 2.1.12 leads to the equalities …, which conclude the proof of the Theorem.

…, where the products are taken over all $v \in \mathrm{MSpec}(O_F)$. Note that we have used the fact that $t$ acts as …, which concludes the proof.

4. The volume function

In this section we define the volume function $\mathrm{Vol} : \mathcal{C} \to \mathbb{F}_q((t^{-1}))[G]^+$.
4.1. Indices of projective $A[G]$-lattices. The first ingredient needed for defining the desired volume function is a notion of an index $[\Lambda : \Lambda']$ for a pair of $A[G]$-lattices in $K_\infty$. For the moment, let us assume that $\Lambda$ and $\Lambda'$ are both free $A[G]$-lattices, with bases $e := (e_1, \ldots, e_n)$ and $e' := (e'_1, \ldots, e'_n)$, where $n := [F : \mathbb{F}_q(t)]$. Then $e$ and $e'$ remain $\mathbb{F}_q((t^{-1}))[G]$-bases for $K_\infty$ (an immediate consequence of Definition 1.4.2). Therefore there exists a unique matrix $X \in \mathrm{GL}_n(\mathbb{F}_q((t^{-1}))[G])$ such that $(e')^t = X \cdot e^t$. While the determinant $\det(X)$ depends on the choice of $e$ and $e'$, its image $\det(X)^+$ via the canonical group morphism of Corollary 7.3.6 obviously does not depend on any choices. From the definition of Fitting ideals, one can easily see that ….

The following Lemma permits us to transition from free to projective $A[G]$-lattices. …

Proof. We prove (1) and leave the proofs of (2) and (3) to the interested reader. Since any two free $A[G]$-lattices $F_1$ and $F_2$ which contain $\Lambda$ (respectively $\Lambda'$) can be embedded into a third free $A[G]$-lattice $F_3$ which contains $\Lambda$ (respectively $\Lambda'$), it suffices to prove that …. We have an obvious exact sequence of finite, $G$-c.t. $A[G]$-modules …. Combined with Lemma 7.4.3, this yields the equality ….

Let the class in $\mathrm{Ext}^1_{A[G]}(H, K_\infty/\Lambda)$ corresponding to (4.2.1) denote the extension class of $M$. Now, since $K_\infty/\Lambda$ is $A$-divisible, therefore $A$-injective (because $\mathbb{F}_q((t^{-1}))/A$ is), $\pi$ admits a section $s$ in the category of $A$-modules (not $A[G]$-modules, in general). Pick such a section $s$ and note that we have an $A$-module isomorphism $(K_\infty/\Lambda) \times s(H) \simeq M$, given by $(\iota, \mathrm{id})$. To simplify notation, we will drop $\iota$ from the notation, think of it as an inclusion, and think of the isomorphism above as an equality in what follows.

Proof. Let $\Lambda'$ be a free $A[G]$-lattice satisfying property (1) in the above definition. (See Proposition 7.2.1 in the Appendix for its existence.) We will modify $\Lambda'$ so that it satisfies property (3) as well. For that, let $x \in H$ and $g \in G$, and let (under the $G$-action on $M$) $g \cdot (0, s(x)) := (a_{g,x}, b_{g,x}) \in (K_\infty/\Lambda \times s(H)) = M$. Since the $A$-module $s(H) \simeq H$ is finite, there exists some $f' \in A \setminus \{0\}$ such that $f' \cdot s(x) = 0$, for all $x \in H$. Then, since the $G$-action on $M$ commutes with multiplication by elements in $A$, we find that $f' a_{g,x} = 0$, for all $x$ and $g$ as above. Now, it is easily seen that the free $A[G]$-lattice … has the required properties.

If $(M_i, s_i)$, $i = 1, \ldots, m$, are data such that $\mathbb{F}_q(t)\Lambda_i$ is independent of $i$ (i.e. the $\Lambda_i$'s are contained in a common $A[G]$-lattice $\Lambda$), and $s_i$ is a fixed section for $M_i$, then the proof of the Proposition above can be easily adapted to show that there is a lattice $\Lambda'$ which is $(M_i, s_i)$-admissible for all $i$. Also, note that given data $(M, s)$ as above and an admissible $(M, s)$-lattice $\Lambda'$, we have a short exact sequence of $A[G]$-modules …. Consequently, the monic element … is well defined, for any admissible $(M, s)$-lattice $\Lambda'$.

Now, we are ready to define the desired volume function. To make the definition, we first fix a projective $A[G]$-lattice $\Lambda_0 \subseteq K_\infty$, which will be used for normalization. The volume function will depend on $\Lambda_0$, but not in an essential way. The next result shows that Vol is well defined, i.e. is independent of all choices except for $\Lambda_0$, and its dependence on $\Lambda_0$ disappears in quotients. Applying Lemma 7.4.3 in the Appendix to the above sequence gives an equality …. Independence on $\Lambda'$ follows from the equality above combined with Lemma 4.1.5(2).
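To make the index just defined concrete, here is the simplest possible case, worked out by me as an illustration and not taken from the paper: $K = F = \mathbb{F}_q(t)$, so $G$ is trivial, $n = 1$, and lattices are fractional $A$-ideals inside $\mathbb{F}_q((t^{-1}))$.

```latex
% With \Lambda = A and \Lambda' = fA for some nonzero f \in \mathbb{F}_q(t),
% the change-of-basis matrix is the 1x1 matrix X = (f), so
\[
  [\,A : fA\,] \;=\; \det(X)^{+} \;=\; f^{+},
\]
% the unique monic representative of f; e.g. for f = c\,t^2 + \cdots
% with leading coefficient c \in \mathbb{F}_q^\times, one has
% f^{+} = c^{-1} f. For monic f \in A this is f itself, recovering
% Taelman's lattice index when G is trivial, as the text asserts.
```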
Now, for two distinct sections $s_1$ and $s_2$, it is easy to see that one can pick a sufficiently large lattice $\Lambda'$ which is both $(M, s_1)$- and $(M, s_2)$-admissible, and with the additional property that, for all $x \in H$, $(s_1(x) - s_2(x)) \in \Lambda'/\Lambda$. It is easily seen that for such $\Lambda'$, the identity map on $M$ induces an isomorphism of $A[G]$-modules …. Therefore $|\Lambda'/\Lambda \times s_1(H)|_G = |\Lambda'/\Lambda \times s_2(H)|_G$, which proves independence on $s$. Part (2) is immediate, as $\Lambda_0$ is $K_\infty/\Lambda_0$-admissible. Part (3) follows by noting that for $M_1, M_2 \in \mathcal{C}$, we have …, where the notations are the obvious ones. Part (4) is left to the interested reader, as it will not be used in this paper.

5. A G-equivariant volume formula

The purpose of this section is to express determinants of certain nuclear operators in the sense of §2 in terms of a quotient of volumes in the sense of §4. Eventually, this will allow us to express our special $L$-values $\Theta^{E,M}_{K/F}(0)$ in terms of volumes, in preparation for proving the ETNF and the Drinfeld module analogue of the refined Brumer-Stark conjecture.

5.1. Maps tangent to the identity. Below, $K_\infty$ is endowed with the sup of the local norms, denoted $\|\cdot\|$, normalized so that $\|t\| = q$. The closed unit ball in $K_\infty$ is denoted $O_{K_\infty}$, as usual. Let $M_1, M_2 \in \mathcal{C}$, with structural short exact sequences …. We endow $\iota_s(t^{-i}O_{K_\infty})$ with the norm which makes $\iota_s : t^{-i}O_{K_\infty} \simeq \iota_s(t^{-i}O_{K_\infty})$ a bijective isometry, for $s = 1, 2$ and all $i \geq \ell$. … If $\gamma$ is $N$-tangent to the identity for all $N \geq 0$, $\gamma$ is called infinitely tangent to the identity.

Proof. Let $N \geq 1$. We will show that $\gamma$ is $N$-tangent to the identity. Since the power series for $\Gamma$ is everywhere convergent, the coefficients $\alpha_i$ must be bounded in norm. Let $\alpha := \sup_i \|\alpha_i\|$. Thus, if $i \geq \ell$ is sufficiently large and $z \in t^{-i}O_{K_\infty}$, then we have …. Hence $(\iota_2^{-1} \circ \gamma \circ \iota_1)$ is an isometry, which is strictly differentiable at $0$, with $(\iota_2^{-1} \circ \gamma \circ \iota_1)'(0) = 1$. By the non-archimedean inverse function theorem (see [11, 2.2]), for all $i \gg \ell$ the map $(\iota_2^{-1} \circ \gamma \circ \iota_1)$ …, which shows that, indeed, $\gamma$ is $N$-tangent to the identity.

…, where $\delta_n = (t - \gamma^{-1}t\gamma)t^{n-1}$, for all $n \geq 1$.

Proof. For simplicity, below we suppress $\iota_1$ and $\iota_2$ from the notations (and think of them as inclusions). We need to show that each $\delta_n$ is locally contracting in the sense of 2.1.8, for all $n < N$. Fix $n < N$, and fix $i \geq \ell$ as in Definition 5.1.2 applied to $\gamma$. We will show that $\delta_n(U_{j,\infty}) \subseteq U_{j+1,\infty}$, for all $j \geq i + n$. Since $-j + n \leq -i$, we obviously have …. Also, if $\gamma_i$ is as in Definition 5.1.2, then we have equalities of functions defined on $t^{-j}O_{K_\infty}$: …. Consequently, the conditions imposed upon $\gamma_i$ in Definition 5.1.2 imply that …. In particular, if $z \in U_{j,\infty}$ then $\delta_n(z) \in t^{-j-1-a}O_{K_\infty}$, and the inclusions (5.1.1) show that $\delta_n(z) \in U_{j+1,\infty}$, for all $j \geq i$. This shows that $U_{i,\infty}$ is a nucleus for $\varphi$.

(1) For $i$ chosen as above it is easy to check that …. So, $\alpha\psi$ and $\psi\alpha$ are $(M-1)$-contractions on $t^{-i}O_{K_\infty}$. Therefore, they are locally contracting endomorphisms of $V$ by Remark 5.2.2, and so the nuclear determinants in (2) make sense. (2) The last displayed inequalities, combined with $(M-1) > a$ and Remark 5.2.2, show that $\psi$, $\alpha\psi$, and $\psi\alpha$ are all locally contracting on $V$ with common nuclei $U_{j,\infty}$, for all $j \geq i$. Now, since $\gamma$ is an isomorphism and $V$ is $t$-divisible (because $K_\infty$ is), $\alpha$ is surjective. Therefore $\alpha$ induces an $R$-module isomorphism …. Since $\gamma$ is an isomorphism, the $R$-modules $\gamma^{-1}(t^{-1}U_{i,\infty})$, $\gamma^{-1}(\tfrac{1}{t}\Lambda/\Lambda)$ and $\alpha^{-1}(U_{i,\infty})$ are all projective (equivalently, $G$-c.t.), because $U_{i,\infty}$ and $\Lambda$ are $G$-c.t.
Also, note that …. Consequently, we have a commutative diagram of morphisms of finite, projective $R$-modules, whose horizontal maps are isomorphisms. This gives an equality of (regular) determinants …. Now, consider the short exact sequence of projective $R$-modules …. Noting that (5.2.5) implies that $\psi\alpha$ induces an $R$-linear endomorphism of the exact sequence above, and that $\psi\alpha \equiv 0$ on $\alpha^{-1}(U_{i,\infty})/\alpha^{-1}(U_{i,\infty})^*$, the exact sequence above gives …. Now, since $\psi\alpha$ is an $(M-1)$-contraction on $t^{-i}O_{K_\infty}$ (see the proof of part (1)), (5.2.4) leads to the following inclusions: …. Consequently, a short exact sequence argument similar to the one used to prove (5.2.7) above gives the following equalities of (regular) determinants: …. Now, we combine these equalities with (5.2.6) and (5.2.7) to obtain …. Recalling that $U_{i,\infty}$ and $U_{i+1,\infty}$ are common nuclei for $\psi\alpha$ and $\alpha\psi$, this leads to the desired equality of nuclear determinants, which concludes the proof of part (2).

Corollary 5.2.9. Let $\gamma : V \simeq V$ be an $R$-linear, continuous isomorphism which is $(2N)$-tangent to the identity, for some $N \geq a$. Then, we have ….

Proof. We use the main ideas in the proof of Corollary 1 in [15]. Let $Z := T^{-1}$. Let $\alpha := t\gamma$ and $\psi := (\gamma^{-1} - 1)$, viewed as a continuous, $R$-linear endomorphism of $V$. Then, we have …. Now, since $\gamma^{-1}$ is $(2N)$-tangent to the identity, $\psi$ is a local $(2N + a)$-contraction. (See Definition 5.1.2(2).) As in the proof of Cor. 1 of [15], one writes …, where the $\psi_n$'s are uniquely determined polynomials in $R\{\alpha, \psi\}$ of degree at most $n$, containing at least one factor of $\psi$. According to Remark 5.2.8, $\psi_n$ is a local $(N + a + 1)$-contraction on $V$, for all $n < N$. Since $M := (N + a + 1) > 2a$, we may apply Proposition 5.2.3(2) to $\alpha$, $\psi := \psi_n$ and $M$ to conclude that ….

5.3. Volume interpretation of determinants. The next theorem is motivated by the fact that if $\gamma : \ldots$ …. This follows immediately from Remark 2.1.11 and the observation that $H_1$, endowed with the modified $t$-action $t^*$, ….

Remark 5.3.3. Although we believe that the above Theorem holds for general $M_1$ and $M_2$, for the purposes of this paper it is sufficient to prove this result for $M_2$ of the special type described above. We plan on addressing the general case in an upcoming paper.

Proof of Theorem 5.3.2. The proof follows the strategy in §4 of [15]. Below, we use the notations of §§5.1-5.2.

Proof. It is straightforward to see that if an $R$-linear morphism $\delta : M_2 \to M_2$ is locally contracting, and if an $R$-linear isomorphism $\gamma : M_1 \simeq M_2$ is $2N$-tangent to the identity for $N \geq a$, then $\gamma^{-1}\delta\gamma : M_1 \to M_1$ is locally contracting. This is a direct consequence of the definitions and the identity …. In our context, this observation allows us to write ….

Let $\gamma$ be an $R$-linear isomorphism which is $2N$-tangent to the identity, for some $N > a$. Then ….

Proof. Fix an $A$-linear section $s_1$ for $\pi_1$. Per Remark 4.2.4, we may assume that $\Lambda$ is $(M_1, s_1)$-admissible. Hence, $\Lambda$ is $A[G]$-free and $(\Lambda/\Lambda_1 \times s_1(H_1))$ is a finite, projective $A[G]$-submodule of $M_1$. Now, we can pick an $R$-projective, open submodule $U$ of $K_\infty$ such that …, which satisfies all the desired properties. Now, $\gamma$ gives an $R$-linear isomorphism …, where the two direct sums are viewed in the category of topological $R$-modules (not $R[t]$-modules, as $U$ is not an $R[t]$-submodule of $K_\infty$). We claim that this implies that there exists an $R$-module isomorphism (not necessarily induced by $\gamma$) …. To prove this, let us pick an $i \in \mathbb{Z}_{>\ell}$ sufficiently large, so that $\gamma : \ldots$, for all $j$. Now, since all modules involved are finite, we must have an equality of cardinalities $|(A_1)_j| = |(A_2)_j|$, for all $j$.
However, the modules $(A_1)_j$ and $(A_2)_j$ are $R_j$-projective, therefore $R_j$-free. Hence, since the rings $R_j$ are finite, the equality of cardinalities implies an equality of $R_j$-ranks, which in turn gives isomorphisms $(A_1)_j \simeq (A_2)_j$ as $R_j$-modules, for all $j$. Consequently, we have an isomorphism of $R$-modules $A_1 \simeq A_2$, as desired. Fix an isomorphism $\xi$ as above and define the $R$-module isomorphism …. Obviously, $\rho$ is infinitely tangent to the identity. Therefore, Lemma 5.3.4 implies that …. Now, we have a commutative diagram of topological morphisms of modules in class $\mathcal{C}$, whose rows are exact and $R[t]$-linear (see (4.2.5)) and whose vertical maps are $R$-linear isomorphisms, $(2N)$-tangent to the identity. This leads to an equality of nuclear determinants …. However, (5.3.1) combined with the definition of the volume function gives …, which concludes the proof of the Lemma.

These facts imply that, if we fix an $N > a$, we can write … linear isomorphisms whose matrices in the basis $e_1$ are $X$, $X_0$ and $B$, respectively. We have a commutative diagram (5.3.8) of morphisms in the category of compact $R[t]$-modules, with exact rows and vertical isomorphisms. In other words, the lower exact sequence is the push-out along $\varphi_{X_0}$ of the upper one. Now, note that $M'_1$ is an object in class $\mathcal{C}$ (the lower exact sequence is its structural exact sequence). Most importantly, note that, since …, it is easy to see from the definitions that $\Lambda'_1$ and $\Lambda_2$ are contained in a common $A[G]$-lattice of $K_\infty$. Consequently, Lemma 5.3.6 applied to $\gamma \circ \varphi^{-1}$ gives …. However, since $\varphi$ is $R[t]$-linear, we have $\varphi t \varphi^{-1} = t$, so $(1 + \Delta_{\varphi^{-1}}) = 1$ on $M'_1$. Therefore, the above congruence combined with (5.3.5) gives …. Now, let $s'_1 := \varphi \circ s_1$. Diagram (5.3.8) shows that $s'_1$ is a section of $\pi'_1$ and that $\Lambda' \ldots$. Combined with (5.3.9), this leads to …. After taking a limit for $N \to \infty$, this concludes the proof of Theorem 5.3.2.

6. The main theorems

In this section, we prove the main results of this paper, announced in §1.5. We work with the notations, and under the assumptions, of §§1.1-1.5.

6.1. The equivariant Tamagawa number formula for Drinfeld modules. Below, we state and prove the $G$-equivariant generalization of Taelman's class-number formula [15]. … Since $\gamma \circ \iota_1 = \exp_E$ and $\exp_E : K_\infty \to K_\infty$ is given by an everywhere convergent, $R$-linear power series in $F_\infty[[z]]$ of the form $\exp_E = z + a_1 z^q + a_2 z^{q^2} + \cdots$, Proposition 5.1.3 shows that $\gamma$ is infinitely tangent to the identity. Consequently, Theorem 5.3.2 shows that we have …. Therefore, the last displayed equality can be rewritten ….

Proof. In this case, the extension $K/F$ is tame, so all taming modules are equal to $O_K$. Also, all $A[G]$-lattices are $G$-c.t., therefore $A[G]$-projective, and the same holds for the $A[G]$-module $H(E/O_K)$. Therefore, the exact sequence (1.4.5) (with $M = O_K$) is split in the category of $A[G]$-modules. So, if $s$ is an $A[G]$-linear section of $\pi$, we have equalities …, where $\Lambda_0$ is the auxiliary $A[G]$-lattice fixed in Definition (4.2.6). Now, since we have an isomorphism of $A[G]$-modules …, the desired result follows directly from Theorem 6.1.1 and the equalities above.

Remark 6.1.3. As pointed out in the introduction, if $G$ is the trivial group (i.e. $K = F$), the above Corollary is precisely Taelman's class number formula [15].
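The displayed equalities of Theorem 6.1.1 did not survive extraction. Since the introduction describes Taelman's result as an equality between the special value and "a quotient of volumes of two compact topological groups", the equivariant statement should take roughly the following shape; the display below is my reconstruction of that shape (with $M$ a taming module), not a quotation of the theorem.

```latex
% Reconstructed shape of the equivariant Tamagawa number formula:
\[
  \Theta^{E,M}_{K/F}(0)
  \;=\;
  \frac{\operatorname{Vol}\bigl(E(K_\infty)/E(M)\bigr)}
       {\operatorname{Vol}\bigl(K_\infty/M\bigr)}
  \;\in\; \mathbb{F}_q((t^{-1}))[G]^{+} .
\]
% For trivial G (i.e. K = F) this collapses to Taelman's class number
% formula for \zeta_F^E(0), as Remark 6.1.3 asserts.
```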
If $K$ is the extension of $F$ obtained by adjoining the $v_0$-torsion points of the Carlitz module $C$, for some $v_0 \in \mathrm{MSpec}(A)$, then the above Corollary applies, because $G$ is a subgroup of $(A/v_0)^\times$ and therefore of order coprime to $p$, and it implies the main result of Angles-Taelman in [3].

6.2. The refined Brumer-Stark conjecture for Drinfeld modules. As an application of Theorem 6.1.1, we prove the Drinfeld module analogue of the classical refined Brumer-Stark Conjecture for number fields. We remind the reader that the classical refined Brumer-Stark Conjecture roughly states that the special value $\Theta_{K/F,T}(0)$ of a $G$-equivariant Artin $L$-function $\Theta_{K/F,T} : \mathbb{C} \to \mathbb{C}$, associated to an abelian extension $K/F$ of number fields of Galois group $G$, belongs to the Fitting ideal $\mathrm{Fitt}^0_{\mathbb{Z}[G]}(\mathrm{Cl}^\vee_{K,T})$ of the Pontrjagin dual of a certain ray-class group $\mathrm{Cl}_{K,T}$ of the top field $K$. This classical conjecture has tremendously far-reaching applications to the arithmetic of number fields, ranging from explicit constructions of Euler systems and of very general algebraic Hecke characters, to understanding the $\mathbb{Z}[G]$-module structure of the Quillen $K$-groups $K_i(O_K)$. (See [4] for more details.) Its Drinfeld module analogue is the following. …

7. Appendix

The goal of this Appendix is to develop several tools, mostly of a homological nature, needed throughout the paper. In what follows, if $S$ is a commutative ring, $\mathrm{Spec}(S)$ and $\mathrm{MSpec}(S)$ denote the spectrum (set of prime ideals) and maximal spectrum (set of maximal ideals) of $S$, respectively. If $M$ is an $S$-module and $\wp \in \mathrm{Spec}(S)$, then $M_\wp$ denotes the localization of $M$ at $\wp$, viewed as a module over the localization $S_\wp$ of $S$ at $\wp$. Recall that if $M$ is a finitely generated, projective $S$-module, then $M_\wp$ is $S_\wp$-free of finite rank, denoted $\mathrm{rk}_\wp M$. The local rank function $\mathrm{rk} : \mathrm{Spec}(S) \to \mathbb{Z}_{\geq 0}$, $\wp \mapsto \mathrm{rk}_\wp M$, is locally constant in the Zariski topology of $\mathrm{Spec}(S)$ and therefore constant if $\mathrm{Spec}(S)$ is connected (i.e. if $S$ has no non-trivial idempotents). Also, recall that a finitely generated $S$-module $M$ is projective if and only if $M_m$ is $S_m$-projective, for all $m \in \mathrm{MSpec}(S)$. If $S$ is local, a theorem of Kaplansky states that $M$ is projective if and only if $M$ is free (even if $M$ is not finitely generated). See [13] for these facts. If $M$ is a finitely generated, projective $S$-module, $S$ is Noetherian, and $\varphi \in \mathrm{End}_S(M)$, there exists a unique element $\det_S(\varphi|M) \in S$ (called the determinant of $\varphi$) which maps to $(\det_{S_\wp}(\varphi_\wp|M_\wp))_{\wp \in \mathrm{Spec}(S)}$ via the canonical embedding $S \hookrightarrow \prod_\wp S_\wp$. One can see that …, where $Q$ is any finitely generated $S$-module such that $M \oplus Q$ is $S$-free. (See [9].) If $M$ is a finitely presented $S$-module, then we denote by $\mathrm{Fitt}^0_S(M)$ its (0-th) Fitting ideal. If $R$ is a commutative ring and $G$ is an abelian group, then $I_G$ denotes the augmentation ideal of the group ring $R[G]$, i.e. the kernel of the $R$-algebra augmentation morphism $s_G : R[G] \to R$, sending $g \mapsto 1$, for all $g \in G$.

7.1. Characteristic p group-rings and their modules. In what follows, $R$ is a commutative ring of characteristic $p$, $G$ is a finite, abelian group, and $M$ is an $R[G]$-module.

Lemma 7.1.1. If $G$ is a $p$-group, then the following hold. (1) There is a one-to-one correspondence, preserving maximal ideals: …

Proof. (1) First, note that every element of $I_G$ is nilpotent. Indeed, if $x \in I_G$, then $x = \sum_{\sigma \in G} a_\sigma \cdot (\sigma - 1)$, for some $a_\sigma \in R$. Since $\mathrm{char}(R) = p$ and $G$ is a $p$-group, we have $x^{p^k} = \sum_{\sigma \in G} a_\sigma^{p^k} \cdot (\sigma^{p^k} - 1) = 0$, where $p^k$ is the exponent of $G$. It follows that $I_G$ is contained in every prime ideal of $R[G]$. Now, (1) follows since $R[G]/R$ is an integral extension of rings, and therefore any prime (maximal) ideal in $R[G]$ contains a unique prime (maximal) ideal of $R$, plus the obvious isomorphisms of rings $R[G]/\mathfrak{p}_G \simeq R/\mathfrak{p}$, for all $\mathfrak{p}$ as above, and $R[G]/I_G \simeq R$.
(3) Let $\mathfrak{p} \in \mathrm{Spec}(R)$. Note that $R_\mathfrak{p}[G]$ embeds in $R[G]_{\mathfrak{p}_G}$. To show that the two are equal, suppose that $x \in (R[G] \setminus \mathfrak{p}_G)$. This means that $s_G(x) \in (R \setminus \mathfrak{p})$. Since $G$ is a $p$-group and $\mathrm{char}(R) = p$, we have …. The fact that $M_\mathfrak{p} = M_{\mathfrak{p}_G}$ follows similarly.

Lemma 7.1.2. Assume that $G$ is a $p$-group. Assume that $R$ is a DVR and $M$ is finitely generated, or that $R$ is a field and $M$ is arbitrary. Then, the following are equivalent: …

Proof. Since in this case $R[G]$ is a local ring (see (2) of the previous Lemma), (1) and (2) are obviously equivalent. Now, if $R$ is a field, then the equivalence of (2) and (3) is proved similarly to Theorem 6 in Ch. VI, §9 of [1]. (In loc. cit. $R = \mathbb{F}_p$.) If $R$ is a DVR with maximal ideal $m = \pi R$, then the equivalence of (2) and (3) is proved similarly to Theorem 8 in Ch. VI, §9 of [1], by replacing $\mathbb{Z}$, $p$, and $\mathbb{F}_p$ with $R$, $\pi$, and $R/\pi$, respectively.

Now, if $G$ is not necessarily a $p$-group, we let $G = P \times \Delta$, where $P$ is the $p$-Sylow subgroup of $G$ and $\Delta$ its complement. Assume that $R$ is a Dedekind domain. For a character $\chi : \Delta \to \overline{Q(R)}$ with values in the separable closure of the field of fractions $Q(R)$ of $R$, we denote by $\overline{\chi}$ its equivalence class under the equivalence relation $\chi \sim \sigma \circ \chi$ given by conjugation with elements $\sigma$ of the absolute Galois group $G_{Q(R)}$. It is easily seen that the irreducible idempotents of $R[G]$ are indexed by these equivalence classes and are given by
$$e_{\overline{\chi}} = \frac{1}{|\Delta|} \sum_{\delta \in \Delta} \Big( \sum_{\chi' \in \overline{\chi}} \chi'(\delta^{-1}) \Big)\, \delta .$$
Here, $\Delta(R)$ denotes the set of all equivalence classes of characters described above. Implicitly, we have picked and fixed representatives $\chi \in \overline{\chi}$, for all $\overline{\chi} \in \Delta(R)$. Consequently, we have ring isomorphisms …. We let $I_\chi := \ker(s_\chi)$, where $s_\chi$ is the following composition of $R$-algebra morphisms: …. Note that these are generalizations of the augmentation ideals $I_G$ and maps $s_G$ considered earlier. For every $\mathfrak{p}_\chi \in \mathrm{Spec}(R(\chi))$, we let $\mathfrak{p}_{\chi,G} := s_\chi^{-1}(\mathfrak{p}_\chi)$. The following are immediate consequences of Lemma 7.1.1. … (2) If $R$ is a DVR or a field, then $R[G]$ is a semilocal ring (i.e. a finite direct sum of local rings) with local direct summands $R(\chi)[P]$, for all $\overline{\chi}$.

Proof. This is immediate from the previous Lemmas. Please note that, in this context (where multiplication by $p$ on $M$ is the $0$-map), $P$-cohomological triviality is equivalent to $G$-cohomological triviality. …

Proof. (1) Hilbert's normal basis theorem asserts that $K \simeq F[G]$, as $F[G]$-modules. Consequently, … as $\mathbb{F}_q((t^{-1}))$-modules, and part (1) follows. (2) Let $V = \mathbb{F}_q(t)\Lambda$. By the definition of $A[G]$-lattices in $K_\infty$ and part (1), we have an isomorphism and equality of $F_\infty[G]$-modules …. Consequently, $V \otimes_{\mathbb{F}_q(t)} \mathbb{F}_q((t^{-1}))$ is $G$-c.t. However, since $\mathbb{F}_q((t^{-1}))$ is a faithfully flat $\mathbb{F}_q(t)$-module, for all $i \in \mathbb{Z}$ and every subgroup $H$ of $G$ we have …. Consequently (again, by faithful flatness), $H^i(H, V) = 0$, for all $i$ and $H$. Therefore $V$ is a $G$-c.t. $\mathbb{F}_q(t)[G]$-module. By Corollary 7.1.7(1), $V$ is a projective $\mathbb{F}_q(t)[G]$-module. Now, it is easily seen that the local rank function of $V$ over $\mathrm{Spec}(\mathbb{F}_q(t)[G])$ is the same as the local rank function of $V \otimes_{\mathbb{F}_q(t)} \mathbb{F}_q((t^{-1}))$ over $\mathrm{Spec}(\mathbb{F}_q((t^{-1}))[G])$. Therefore this function is constant, equal to $n$. Now, (2) follows from Corollary 7.1.7(3).

In what follows, for a prime $v$ of $F$, we let $F_v$ and $O_v$ be the completion of $F$ at $v$ and its ring of integers, respectively. We use similar notations for primes $w$ of $K$. If $v$ is a prime in $F$, we let $K_v := \prod_{w|v} K_w$ and $O_{K_v} := \prod_{w|v} O_w$, where the products are taken over all the primes $w$ in $K$ sitting above $v$. We endow these products with the product of the $w$-adic topologies. Also, $\tau$ will denote the $q$-power Frobenius endomorphism of any $\mathbb{F}_q$-algebra.
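Throughout, the ring $O_F\{\tau\}$ of twisted polynomials in the Frobenius $\tau$ is used without further comment. The following standard relation (not spelled out in the text, but forced by $\tau(x) = x^q$) may help the reader keep the non-commutativity straight.

```latex
% In O_F\{\tau\}, multiplication is composition of F_q-linear maps, so
% for a \in O_F and x in any F_q-algebra on which \tau acts as x -> x^q:
\[
  (\tau \cdot a)(x) \;=\; \tau(a x) \;=\; a^q x^q
  \qquad\Longrightarrow\qquad
  \tau\, a \;=\; a^q\, \tau .
\]
% In particular O_F\{\tau\} is non-commutative unless O_F = F_q, and
% \varphi_E(t) = t + a_1\tau + \cdots + a_r\tau^r is F_q-linear but not
% O_F-linear in general.
```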
The following is a classical theorem of E. Noether (see [16] and the references therein).

Theorem 7.2.2. Let $K/F$ be a finite Galois extension of Galois group $G$. Let $R$ be a Dedekind domain whose field of fractions is $F$, and let $S$ be the integral closure of $R$ in $K$. Then $S/R$ is tamely ramified if and only if $S$ is a projective $R[G]$-module of constant rank 1.

The above result justifies the following definition. … (4) For any such $M$ and $v \in \mathrm{MSpec}(R)$ which is tame in $K/F$, we have ….

Proof. Let $\mathcal{W} \subseteq \mathrm{MSpec}(R)$ be the wild ramification locus for $S/R$. For primes $v \in \mathrm{MSpec}(R)$, $R_{(v)}$ is a DVR and $S_{(v)}$ is its integral closure in $K$, which happens to be a semilocal PID. Note that, as a consequence of Theorem 7.2.2 and Corollary 7.1.7(3), we have isomorphisms of $R_{(v)}[G]$-modules …, for all $v \notin \mathcal{W}$. Let $\omega_0 \in S$ be an $F[G]$-basis for $K$. Then, we can write ….

Now, we use the character decomposition (7.1.4) for $\mathbb{F}[G]$ to obtain an isomorphism with a direct sum of rings indexed by $\overline{\chi} \in \Delta(\mathbb{F})$: ….

Proof. According to the last definition above, and since …, it suffices to prove the Proposition in the case where $G = P$ is a $p$-group, which we assume below. The main ingredient needed in the proof is the following $m$-adic Weierstrass preparation theorem. (See [12] for a proof.)

Theorem ($m$-adic Weierstrass preparation). Let $R$ be a commutative ring, complete in the $m$-adic topology for an ideal $m$, and let $f = \sum_{i \geq 0} a_i X^i \in R[[X]]$. Assume that $n \in \mathbb{Z}_{\geq 0}$ is minimal with the property that $a_n \notin m$. Then $f$ has a unique Weierstrass decomposition $f = u \cdot (X^n + b_{n-1}X^{n-1} + \cdots + b_0)$, with $u \in R[[X]]^\times$ and $b_0, \ldots, b_{n-1} \in m$.

Note that the ring $(\mathbb{F}[P], I_P)$ is local (see Lemma 7.1.1). Since the augmentation ideal $I_P$ is nilpotent (see the proof of Lemma 7.1.1), the ring $\mathbb{F}[P]$ is $I_P$-adically complete. Now, let $g = \sum_{i \geq n} a_i t^{-i}$, with $a_n \neq 0$. Since $g$ is a unit, $s(g) = \sum_{i \geq n} s(a_i)t^{-i}$ is a unit. Therefore there exists a minimal $m \in \mathbb{Z}_{\geq n}$ such that $s(a_m) \neq 0$. This means that $m$ is minimal with the property that $a_m \notin I_P$. Now, we apply the Weierstrass preparation theorem to $\widetilde{g}$. We get a unique Weierstrass decomposition …. Consequently, $u = \sum_{i \geq 0} u_i t^{-i}$, with $u_i \in \mathbb{F}[P]$ and $u_0 \notin I_P$. In conclusion, we can write …. Consequently, we have written …. The uniqueness of this writing follows from $\mathbb{F}((t^{-1}))[P]^+ \cap \mathbb{F}[t][P]^\times = \{1\}$, which is obvious.

Corollary 7.3.6. For $\mathbb{F}$, $G$ and $t$ as above, we have a canonical group isomorphism …, sending the class $\overline{g}$ of $g \in \mathbb{F}((t^{-1}))[G]^\times$ to its unique monic representative $g^+$.

Proof. Immediate from the group equality in the previous Proposition.

(1) If $R$ is local and $\mathrm{rank}_R M = n$, then $\mathrm{Fitt}^0_{R[t]}(M)$ is principal and has a unique monic generator $|M|_{R[t]} \in R[t]^+$, which has degree $n$ and is given by $|M|_{R[t]} = \det_R(t \cdot I_n - A_t)$, where $A_t \in M_n(R)$ is the matrix of the $R$-endomorphism of $M$ given by multiplication by $t$, in any $R$-basis $e$ of $M$. (2) If $R$ is semilocal, then $\mathrm{Fitt}^0_{R[t]}(M)$ is principal and has a unique monic generator ….

Proof. (Sketch) Obviously, it suffices to prove part (1). This is a simple variation of the proof of Proposition 4.1 of [9]. More precisely, one picks an $R$-basis $e$ for $M$ and writes the following sequence of $R[t]$-modules: $0 \to R[t]^n \xrightarrow{\rho_t} R[t]^n \xrightarrow{\pi_t} M \to 0$, where $\pi_t$ maps the standard $R[t]$-basis of $R[t]^n$ bijectively to $e$ and $\rho_t$ has matrix $(t \cdot I_n - A_t)$ in the standard basis of $R[t]^n$. As in loc. cit., one proves that this sequence is exact. This yields part (1).

Proof. (Sketch) Obviously, it suffices to prove the statement when $R$ is local. Fix a section $s : C \to B$ for $\pi$ in the category of $R$-modules. Pick $R$-bases $a$ and $c$ for $A$ and $C$, respectively. Then, note that $b := \iota(a) \cup s(c)$ is an $R$-basis for $B$. Now, apply part (1).

We take a maximal ideal $v$ in $O_F$ and fix a maximal ideal $w$ of $O_K$ above $v$.
We let $v_0$ denote the maximal ideal of $A$ sitting below $v$, and we let $Nv$ denote the unique monic generator of …. (1) … (2) We have an equality $|O_K/v|_G = Nv$. (3) If $C$ denotes the $\mathbb{F}_q[t]$-Carlitz module defined over $O_F$, then …, where $e_v = \frac{1}{|I_v|}\sum_{\sigma \in I_v}\sigma$ is the idempotent in $\mathbb{F}_q[G]$ corresponding to the trivial character of the inertia group $I_v$ and $\sigma_v$ is any Frobenius morphism for $v$ in $G$.

Proof. … Then we have the following obvious equality, for all $i$ as above: ….

(1) This is Proposition 7.2.4(3).

(2) We have an obvious isomorphism of $A[G]$-modules …. Since Fitting ideals commute with extension of scalars, this gives equalities …. Consequently, we have the following equalities, which conclude the proof of part (2): ….

(3) Since we also have an isomorphism of $A[G]$-modules …, we have equalities of ideals and of monic elements, respectively: …. According to Proposition 7.4.1, the definition of $C$ and part (1), we have an equality …, where $M_t$ and $M_\tau$ are matrices in $M_{n_v}(A[G_v])$ associated to multiplication by $t$ and the action of $\tau$ on an $\mathbb{F}_q[G_v]$-basis of $O_w/v$. Now, from part (2) we already know that …. So, we need to analyze the matrix $M_\tau$. Let $K' := K^{I_v}$ be the maximal subextension of $K/F$ unramified at $v$. Let $w'$ be the prime in $O_{K'}$ sitting below $w$, and let $K'_{w'}$ and $O_{w'}$ be the usual completions. Let $\{e_i\}_i$ be the $\mathbb{F}_q[G_v]$-basis of $O_w/v \otimes_{\mathbb{F}_q} \overline{\mathbb{F}}_q$ corresponding, via this isomorphism, to the standard $\mathbb{F}_q[G_v]$-basis ($1$ in slot $i$ and $0$ outside) of $\mathbb{F}_q[G_v]^{n_v}$. In the basis $\{e_i\}_i$, the matrices $M_t$ and $M_\tau$ of the $t$- and $\tau$-actions on $O_w/v \otimes_{\mathbb{F}_q} \overline{\mathbb{F}}_q$ are given by …. This concludes the proof of the Proposition.
syphilitic or otherwise, and he cites Ehrlich's production, by injecting white mice with arsacetin, of phenomena similar to those of the Japanese dancing mouse, due to degeneration of the central fibres of the vestibular nerve. Finally, Alexander, speaking as an otologist, recommends caution in using "606" in cases of acute syphilitic disease of the auditory nerve, and especially in cases of recent syphilis with acute or chronic disease of the auditory nerve, syphilitic or not. Cases of chronic syphilitic affection of the nerve and of chronic labyrinthine dizziness, on the other hand, appear to benefit by the injection. Macleod Yearsley.

Reik, H. O.-The Effect of Tobacco on the Ear and Upper Respiratory Tract. "Boston Med. and Surg. Journ.," June 23, 1910. The author begins with a strong indictment of the grossly exaggerated and sometimes even false statements often made as to the evil effects of tobacco and alcohol, and remarks upon the remarkable scarcity of trustworthy literature upon the subject of tobacco and the ear. He enters into the preparation of tobacco and the analysis of tobacco smoke, the weight of evidence going to show that the volume of carbon monoxide contained in tobacco smoke is much more dangerous than the small trace of nicotine to be found. No characteristic lesion of the throat or nose attributable to tobacco has yet been described, nor is there any evidence that smoking causes malignant disease of the throat. The author can find only one definite case of anosmia reported as due to tobacco, and he considers Wyatt Wingrave's testimony as to tobacco deafness incomplete and inconclusive. Other literature on this point is reviewed. A good bibliography is appended. Macleod Yearsley.

MISCELLANEOUS.

Arrowsmith, H.-Certain Aspects of Rhino-laryngology and their Relation to General Medicine. "New York Med. Journ.," December 17, 1910, p. 1209. The author discusses the treatment of impacted foreign bodies, Vincent's angina, and the faucial tonsils, and makes the following propositions concerning the latter: (1) Pure hypertrophy of the faucial tonsil is essentially a phenomenon of early life, and is rather protective than pathological. (2) Its cause is very often disease of the pharyngeal tonsil. (3) A moderate pure hypertrophy, up to the age of puberty, should be respected. (4) When hypertrophy demands interference, the only justifiable operation is amygdalotomy, and not enucleation. (5) Enucleation is justified when disturbances in the tonsillar structure are the source of glandular involvements and a menace to the general health. (6) After puberty, pathological processes in the tonsil demand radical surgical measures. Macleod Yearsley.

"Med. and Surg. Journ.," February 2, 1911, pp. 144-149. The two articles form part of a series of papers upon reflex disturbances. Bryant states that more than 247 different reflexes and reflex neurotic symptoms have been recorded as emanating from some part of the upper air-tract, due to either local inflammatory or structural conditions, and that each stimulation may travel over at least two distinct nerve routes. As a conservative estimate he computes that 9880 different manifestations may occur from the upper air-tract (!). The reflex neuroses in general may be divided into two large groups: simple and complex.
The former includes all forms of exaggerated reaction to normal or non-pathological stimulation, the distinguishing characteristic being the hyperaesthetic condition of the part stimulated, or the abnormally intense reaction to a normal stimulus. The complex neuroses include all those symptoms produced by abnormal stimuli, by hyperaesthesia of the nerve-endings, and by disease in the nerve-tract and nerve-centres through which the stimulation passes. Simple reflex neuroses are caused by repeated or prolonged stimulation carried to a point of nervous exhaustion, which produces hyperaesthesia. Complex reflex neuroses depend upon structural and peripheral changes, which cause an abnormal degree of nerve stimulation. Bryant considers the human nose and its nerve supply to be in a state of degeneration, and that to this is to be attributed its extraordinary susceptibility to pathological reflex action. A long list is given of reflexes and reflex symptoms arising from the pharynx. In the second paper Page refers to ear cough, neuralgias and herpes, and other reflex phenomena associated with external, middle and internal ear conditions, without, however, adding to the knowledge already possessed by otologists. Macleod Yearsley.

REVIEW.

Diseases of the Nose, Throat and Ear-Medical and Surgical. By WILLIAM LINCOLN BALLENGER, M.D. Third edition, revised and enlarged (506 engravings and 22 plates). London: Henry Kimpton. Glasgow: Alexander Stenhouse, 1911. 28s. net.

This is the third edition of Ballenger's imposing and original volume, and shows some amount of revision, although certain blemishes, noted in former editions, still remain. Possibly with the view to preventing an already large volume from becoming too unwieldy, enlargement has taken place in some sections at the expense of others. Thus, the pages upon diseases of the nose and accessory sinuses and those upon the ear have increased by seventeen and thirty-seven respectively, whilst the pharynx and fauces and the larynx have been curtailed by thirty-one and ten. The descriptions of operations are good and, for the most part, clear, and more justice is done to other workers than in previous editions, although the value of reference to original papers is still unrecognised by the author. Many of the operations described are those devised by Ballenger himself, whose originality and fertility in inventing instruments is remarkable. Full details are given for removing the ethmoid cells en masse, and, despite the somewhat naive remark that it was originally devised "for the purpose of obtaining specimens for examina-
Biocatalysts in Synthesis of Microbial Polysaccharides: Properties and Development Trends

Polysaccharides synthesized by microorganisms (bacterial cellulose, dextran, pullulan, xanthan, etc.) possess a set of valuable properties, including antioxidant, detoxifying, structuring, and biodegradable ones, which makes them suitable for a variety of applications. Biocatalysts are the key substances used in producing such polysaccharides; therefore, modern research is focused on the composition and properties of biocatalysts. Biocatalysts determine the possible range of renewable raw materials which can be used as substrates for such synthesis, as well as the biochemistry of the process and the rate of molecular transformations. New biocatalysts are being developed to participate in a widening range of stages of raw material processing. The functioning of biocatalysts can be optimized using the following main approaches of synthetic biology: the use of recombinant biocatalysts, the creation of artificial consortia, the combination of nano- and microbiocatalysts, and their immobilization. New biocatalysts can help expand the variety of the polysaccharides' useful properties. This review presents recent results and achievements in this field of biocatalysis.

Introduction

Biocatalytic synthesis of polysaccharides (PSs) is one of the promising and topical areas of development in modern biotechnology [1]. The variety of their useful properties (the ability to form gels and viscous solutions, high adhesive ability, etc.) helps PSs find ever newer applications in a wide range of fields. These include medicine and the pharmaceutical, food, chemical, textile, and oil and gas industries, as well as the immobilization of cells and enzymes, etc. [2,3]. Many PSs have antitumor, prebiotic, antiviral, anti-inflammatory, antioxidant, and immunomodulatory effects, facilitating wound healing and tissue regeneration, eliminating pain syndrome, neutralizing the side effects of medications, and stimulating hematopoiesis [4][5][6]. Currently, PSs of plant origin are actively used in industry; however, plant-based production is necessarily seasonal and depends on weather conditions. Therefore, the interest in PSs synthesized by biocatalysts (BTCs), i.e., the cells of various microorganisms (bacteria, fungi, etc.) taken in a suspended or immobilized state, is steadily growing. Microbial PSs are more diverse in composition and properties than those of plant origin. Moreover, by controlling the BTCs' properties and the conditions of the biocatalytic processes of biopolymer synthesis, it is possible to obtain polymers with the desired features and in the required quantities [7]. Microbial PSs are characterized by the presence of a large number of functional groups (hydroxyl, carboxyl, carbonyl, acetate, etc.), which make it possible to modify such biopolymer molecules in order to give them valuable properties [7,8]. In microorganisms considered as BTCs and capable of synthesizing PSs, these biopolymers perform a number of diverse functions: in particular, protective, reserve, nutritional, and stabilizing ones; besides, PSs determine the immunological properties and virulence of strains, participate in adhesion processes, and are responsible for the formation of biofilms. Among the microbial PSs synthesized by various BTCs, intracellular and extracellular biopolymers are usually distinguished by their localization.
Intracellular PSs accumulate inside the BTCs, whereas extracellular PSs, exopolysaccharides (EPSs), are usually secreted into the medium containing the cells and can be separated from the BTCs; the biopolymers themselves can assume the form of capsules, mucus, layers, etc. The interest in EPSs today is mainly due to their unique properties, including those that can benefit mankind, particularly the use of EPSs as prebiotics and immunomodulators. It is toward the development of approaches to the biocatalytic production of EPSs and their derivatives that the attention of many researchers is currently directed [9][10][11][12]. Despite the successes achieved in the field of the biotechnology of microbial polysaccharides, the number of PSs produced industrially is extremely limited, and the problem of finding new, cost-effective ways to obtain them is still acute. This is largely due to the low yield and high cost of the resulting products. The main ways to reduce costs include using cheap substrates, increasing yields by creating more productive strains using genetic engineering methods, and optimizing cell culture processes. The rate or degree of conversion of a carbohydrate substrate into a polymer product can be increased by improving the specific activity of the enzymes involved in synthesis and by regulating the biosynthesis pathways of EPS precursors. Another problem in EPS biosynthesis is the change in the rheological properties of the medium at the stage of EPS formation, which complicates mixing and mass transfer. Despite numerous studies and the creation of productive strains, optimal approaches have not yet been found that would allow the creation of mutant strains fully meeting the requirements of industrial production. In addition, the use of genetically modified microorganisms on an industrial scale always has a number of significant limitations, primarily ecological ones. There is also a need to introduce expensive inducers into nutrient media for the biosynthesis of the necessary enzymes in such cells, as well as antibiotics to suppress the native microflora. Undoubtedly, the development of new, highly productive, stable biocatalysts, providing, among other things, fundamentally new materials, will remove a number of restrictions on the use of EPSs. The range of the most significant PSs obtained microbiologically includes pullulan, dextran, bacterial cellulose (BC), alginate, xanthan, levan, curdlan, succinoglycan, and others. The interest in them is primarily due to the variety of their possible practical applications. BTCs play an important role in EPS synthesis, ensuring the flow of interrelated enzymatic biochemical transformations from the initial substrate to the final product. The range of substrates that can be successfully used for bioconversion, and the characteristics of the products thus obtained, depend on the BTCs. The latter also determine the possible biochemical transformations and thus influence the choice of methods and conditions for the synthesis of PSs, as well as the speed of the process and the yield of the target product. The screening of natural BTCs, the construction of recombinant ones with enhanced productivity, and the optimization of producer cultivation processes are among the most promising approaches available to date for developing novel BTCs. New PS-producing strains with modified or introduced enzymatic activity can be created with genetic engineering methods.
These approaches can help expand the range of low-cost substrates available for PS production and increase the degree of their transformation into PSs. Today, it seems very attractive to obtain single-stage nanocomposite materials based on PSs by introducing various additives directly into reaction media with BTCs during the synthesis of PSs, or by organizing the synthesis process using a combination of several BTCs. To date, the mechanisms of synthesis of individual PSs have been studied and are well known, whereas the knowledge about the mechanisms of intercellular interactions when combining several BTCs for PS synthesis is just beginning to emerge. On the other hand, the accumulated and systemized information concerning individual BTCs is certainly a solid base for understanding the general laws of the functioning of more complex biocatalytic systems with multifunctionality and multiple enzymatic activities. Such complex microbial BTCs are the subject of the most recent research in this area. This review is devoted both to summarizing the information on the currently known BTCs used for the synthesis of PSs, and to discussing the prospective development of modern research in this field of biocatalysis. Immobilized Biocatalysts for the Synthesis of Exopolysaccharides There are both prokaryotic cells and eukaryotes that prefer aerobic or anaerobic processes among the biocatalysts synthesizing PSs, and this generally determines the diversity of the biocatalytic processes synthesizing various EPSs. The common trait of all these catalysts in such processes is the increase in the concentration of cells and their transition to a quorum-sensing (QS) state. QS ensures the activation of the synthesis of PSs [13,14] as stabilizing, protective, and reserve substances for highly concentrated microbial populations. Therefore, creating biosystems with a high content of cellular biomass producing PSs and supporting the microbial BTCs in such a concentrated and metabolically active state is one of the nature-like approaches to improving the efficiency of BTCs synthesizing PSs. Cell immobilization can significantly improve the productivity and stability of BTCs (Figure 1).
It is known that immobilized cells, being in the QS state, can withstand significantly higher concentrations of toxic substances than free cells. Immobilized cells have a longer half-inactivation period and can be stored for a long time without a loss of metabolic activity. The immobilization of the cells leads to a change in their genetic and biochemical status, launching various cascade regulatory systems in these cells and intensifying the biochemical processes of basic metabolism. All these factors lead to an increase in the overall productivity, viability, and resistance of these cells [15,16]. This ensures a huge interest in the use of immobilized microbial cells as BTCs for the synthesis of EPSs. For example, Lactobacillus rhamnosus RW-9595M cells immobilized on a solid insoluble carrier (ImmobaSil), a silicone polymer, when cultured in a medium containing whey permeate at a concentration of 5-8 wt.%, were able to synthesize EPS over 4 working cycles with the accumulation of EPSs at a concentration of 1.7 g/L [17]. However, this EPS concentration was lower than that accumulated in the medium with free cells (2.35 g/L), owing to the mass transfer limitations created by the carrier used. At the same time, the high concentration of immobilized cells (8.5 × 10^11 cells/g of carrier) led to an increase in EPS productivity (250 mg/L/h), which was almost 2.5 times higher than in the case of free cells (110 mg/L/h). A continuous process of EPS production, organized using the same immobilized L. rhamnosus RW-9595M cells [18], revealed morphological and physiological changes in the cells, leading to the formation of very large aggregates consisting of the cells and the EPS itself, which reduced the level of accumulation of the latter (0.138 g/L). Therefore, the synthesized PS should be removed from the cells to stimulate further synthesis. Another study examined EPS production in various media with BTCs in the form of Lactobacillus delbrueckii subsp. bulgaricus cells immobilized using Ca-alginate, k-carrageenan, and a number of other carriers [19]. This study showed that the maximum concentration of EPS can be obtained by culturing cells immobilized in Ca-alginate gel in Elliker nutrient medium with the addition of sucrose (5 wt.%). The process duration was 18 h at 37 °C and pH 5.5. The productivity of such cells exceeded that of free cells by 46%. Immobilized BTCs were also used in low-fat cheese production technology, where the maximum amount of EPSs (5.7 mg/g of cheese) was formed after 22 days of the process.
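To make such comparisons concrete, here is a minimal Python sketch computing volumetric productivity and the immobilized/free-cell fold change from the figures quoted above; the 12 h cycle length in the last line is a hypothetical illustration, not a value from the cited study.

```python
# Minimal sketch: volumetric EPS productivity and the immobilized-vs-free
# fold change, using the figures quoted above as inputs.

def productivity(eps_g_per_l: float, hours: float) -> float:
    """Volumetric productivity in mg/(L*h) from a final EPS titre and run time."""
    return eps_g_per_l * 1000.0 / hours

immobilized = 250.0  # mg/(L*h), reported for immobilized L. rhamnosus RW-9595M
free_cells = 110.0   # mg/(L*h), reported for the free cells

print(f"fold change: {immobilized / free_cells:.2f}x")  # ~2.27, i.e. "almost 2.5"

# Example: 1.7 g/L accumulated over a hypothetical 12 h working cycle
print(f"{productivity(1.7, 12.0):.0f} mg/(L*h)")
```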
A study of EPS production using L. plantarum MK O2 bacterial cells immobilized in agar and alginate gels [20] showed an EPS yield of 225 mg/L, which was only slightly higher than that obtained with free cells. On the other hand, another study noted an almost fivefold increase in EPS synthesis (0.9 ± 0.1 g/L) by L. plantarum MTCC 9510 cells immobilized in Ca-alginate gel when the cells were cultured in a medium containing 40 g/L of lactose for 72 h [21]. Such an increase in the synthesis of EPS by immobilized cells was ascribed to an increase in the cell density per unit volume, as well as to the separation of cells from EPS caused by the carrier. Until recently, the immobilization of BTCs in gel matrices was practically not considered an acceptable option. It was supposed that such methods could work only if both the substrate and the synthesis product had a low molecular weight [22]. The production of such high-molecular-weight substances as BC by cells immobilized in polymer gels was considered impossible: it was assumed that BC synthesized inside the granules of the gel carrier would block its own secretion into the medium. However, relatively recently, the possibility of efficiently using Acetobacter xylinum cells immobilized in Ca-alginate gel for producing a food product based on a BC layer was demonstrated [23]. Moreover, the immobilized cells were found to be capable of being reused in batch mode. The duration of one working cycle was 264 h, with an average thickness of the formed BC layer of 0.8 cm. After two working cycles, the viability of the immobilized cells was still high enough. However, Ca-alginate gels are known to have relatively low mechanical strength, which depends on the pH of the reaction medium and the ionic strength of the solutions in which the BTCs function. In some cases, the carrier gels can be destroyed by cell growth inside the gel matrices. The metabolic activity of the cells was also found to reduce the operational life of such carriers and, consequently, that of the BTCs [24]. Poly(vinyl alcohol) (PVA) cryogels have high mechanical and thermal resistance, a rigid macroporous structure with variable pore size, and chemical stability in various environments. They have long been successfully used for the immobilization of various microorganisms in media with complex chemical compositions, under various conditions, pH values, and buffering of the medium, and thus can serve as an alternative to alginate gels [25][26][27][28][29][30][31]. Thus, Komagataeibacter xylinum B-12429 bacterial cells immobilized in PVA cryogel readily synthesized and "pushed" the forming BC filaments through the pores of the polymer carrier; the filaments eventually merged into a dense gel film without covering the cells. The cells were thus deprived of the possibility of transitioning to a resting state, and their synthesis of BC even became steadily more active [32]. When K. xylinum B-12429 cells immobilized in PVA cryogel were cultivated in a medium containing 20 g/L of glucose, the mass of the synthesized BC was 1.6 times higher than that obtained in a suspension culture. Glucose was completely consumed by the immobilized cells within 120 h of cultivation. The accumulation of free cells in the medium during the cultivation of the immobilized BTCs was six times less than the concentration of cells in the suspension culture.
A more dramatic decrease in pH was also observed in the medium with immobilized cells during the first 70 h of cultivation, which indicated a more intensive formation of metabolites reducing the pH of the medium. The above-mentioned role of QS in PS synthesis was also confirmed by the following experiment: an increase in the concentration of the producer cells in the composition of immobilized BTCs (from 20 to 40 g dry cells/kg) led to a noticeable increase in the accumulation of BC in the reaction medium [32]. Note that the specific form of the BTCs used in the process (granules or layers) had virtually no effect on the level of BC accumulation. The possibility of reusing immobilized cells that retain 100% of their metabolic activity for at least 10 working cycles was demonstrated both in media containing pure glucose and in those with Jerusalem artichoke hydrolysates. The study of the BC samples synthesized by K. xylinum B-12429 cells in free and immobilized form under identical conditions showed that polysaccharide films produced by immobilized BTCs had a greater tensile strength, 30% greater thickness, and a higher degree of polymerization. Although only a few studies of the use of immobilized cells for PS biosynthesis have been performed, the high efficiency of such an approach has been proven beyond doubt. The immobilized cells in most cases were in the state of highly concentrated populations and therefore had significantly higher metabolic rates, thus ensuring a higher yield of many target PSs compared to free cells. The possibility of their long-term functioning (Table 1) is yet another advantage of BTC immobilization, allowing an essential increase in the overall efficiency of BC production. The approach based on the use of immobilized cells makes it possible to obtain PSs from various types of renewable non-food raw materials and biomass, providing an essential advantage over the free-cell case in terms of both the process efficiency and the characteristics of the produced PSs (Table 1). [Table 1 lists, for each BTC, the substrate specificity and main product (PS), the conditions for the BTC's use, and the rate of PS synthesis; its entries include, for example, xanthan produced from coconut shell hydrolysates (3.6 g/L) or cocoa husk hydrolysates (4.5 g/L) at a substrate loading of 20.0 g/L (25.0 g/L of sugars), and A. pullulans Y-4137 immobilized in PVA cryogel on hydrolysate media [15].] New Natural, Mutated and Genetically Modified Strains as BTCs for PS Synthesis Following the initial success in the use of immobilized cells for EPS biosynthesis, research continues into ways of controlling the biocatalytic process and improving the characteristics of the BTCs themselves. The latter is achieved both by searching for new producer strains among various microbial cells [69][70][71][72][73] and by producing mutant strains [46,47,60,68,83,87,89] (Table 1, Figure 1). Note that great success has already been achieved in these studies, and the prospects for further development are promising, especially in jointly exploiting the advantages of new strains and cell immobilization in concentrated forms. For example, new strains producing pullulan (Aureobasidium melanogenum TN1-2 (from natural honey), Rhodosporidium paludigenum PUPPY-06 (from fresh and rotting plant leaves), Aureobasidium melanogenum A4 (from soil)) and BC (Lactobacillus hilgardii IITRKH159 (from sapodilla), strains of Komagataeibacter maltaceti (from grape and apple cider vinegar) and Komagataeibacter nataicola (from sloe apple cider vinegar)) were recently isolated and studied [69][70][71][72][73].
It was found that the high efficiency of pullulan biosynthesis (0.86 g/L/h) in A. melanogenum TN1-2 is due to the high activity of such enzymes as glucosyltransferase and phosphofructo-2-kinase. The rate and degree of conversion of carbohydrate-containing substrates into a polymer product can be improved by increasing the specific activity of the enzymes involved in PS synthesis and by regulating the biosynthesis pathways of the EPS precursors. It was found, however, that the overexpression of genes encoding enzymes involved in the synthesis of EPSs can have a negative impact as well as a positive one; the former can be due to the distortion of the multi-domain protein complex responsible for polymerization and polymer secretion [90]. The main directions of research aimed at improving the characteristics of recombinant BTCs are the following: inducing the EPS-producing cells to increase the synthesis of enzymes responsible for the key reactions producing the target polymers [83,87]; and transformation of the cells with plasmids carrying genes that increase the sustainable functioning of cellular BTCs under PS synthesis conditions [76]. Note that, despite numerous studies aimed at creating productive strains, none of them has so far led to the development of mutant strains that fully meet the requirements of large-scale biotechnological production. The inducers required for the biosynthesis of the necessary recombinant enzymes in the nutrient media are still too expensive for industrial-scale application, and antibiotics, which are also generally expensive, are needed to maintain the conditions for the selective cultivation of the genetically modified PS producers. Therefore, in addition to creating BTCs using various genetic constructs, much research has been devoted to developing natural BTCs for the biosynthesis of EPSs. BTCs as Native or Artificial (Co-Cultured) Consortia for EPS Production The use of natural and synthetic consortia is one of the most popular approaches to the development of BTCs for EPS synthesis (Table 2, Figure 1). Various native consortia composed of different cells, including bacteria and fungi (yeasts), are widely known as Kombucha and are applied for producing BC with high enough crystallinity [91][92][93]. The native symbiotic consortia of bacteria and yeast cells (SCOBY), however, can ensure only a relatively low rate of BC synthesis compared to bacterial monocultures producing BC [91][92][93]. There are several reasons for this: the concentration of the producing cells in such consortia is lower than that in pure cultures; additionally, the degradation of substrates and the accumulation of metabolites by the BC producers occur in competition with the other cells in the consortium. Kombucha consortia require media with a very simple composition, namely black tea or herbal extracts with the addition of sucrose (100 g/L), for synthesizing BC, which gives them an advantage over other BC-producing BTCs. It was found that the plant polyphenols present in these media impart antioxidant (free-radical-scavenging) activity to the produced BC biopolymers, which expands the prospects for their use. It has been shown that nitrogen-rich components of the media (peptone or green tea) contribute to an increase in the efficiency of polymer synthesis with SCOBY [92] (Table 2).
The highest rate of BC synthesis was recorded in the joint presence of Brettanomyces bruxellensis MH393498 and Brettanomyces anomalus KY103303 yeast cells, together with Komagataeibacter saccharivorans LN886705 bacteria, in the Kombucha consortium [91] (Table 2). In a medium of optimized composition containing 1% black tea and 6% glucose, at pH 6 and 30 °C for 10 days under static cultivation, the Kombucha consortium produced 0.31 g BC/g glucose, which is 82% higher than in the case of individual genetically modified BC producers [76,91] (Tables 1 and 2). PS production by synthetic consortia relies on the efficient joint functioning of various microorganisms, and controlling such interaction is a promising direction for the rapid production of PSs with improved characteristics. For example, it is known that the biological activity of curdlan increases with a decrease in its molecular weight, and low-molecular-weight curdlan is suitable for a variety of applications in the food industry and the agricultural sector. During the synthesis, the accumulated curdlan covers the cell surface of the main producer (Agrobacterium sp.) and has a negative effect on the mass transfer and aerobic respiration of the cells, thus inhibiting the PS synthesis [94]. β-1,3-endoglucanase from Trichoderma harzianum GIM 3.442 can cleave β-bonds at random sites of the PS chain and exhibits high activity and specificity for the hydrolysis of curdlan. Thus, an efficient BTC can be formed by combining Agrobacterium sp. ATCC 31749 and Trichoderma harzianum GIM 3.442 cells, which are producers of curdlan and β-1,3-endoglucanase, respectively [94] (Table 2). By using this synthetic consortium, the molecular weight of the curdlan was reduced by 34.01% (from 110.85 kDa to 73.15 kDa), and the curdlan yield (47.9 g/L) and the conversion efficiency of glucose to curdlan (0.60 g/g) were increased by 119.93% and 36.36%, respectively [94]. [Table 2 includes, for example, the conversion of molasses (300 g/L) into fructo-oligosaccharides (169.5 g/L) at pH 5.5 and 55 °C in 1 h, i.e. at 169.5 g/L/h; parameters marked with an asterisk were estimated by the authors of the review from the data in the corresponding publications or taken from the references.] Kefiran is a well-known EPS, which is used as a thickener, stabilizer, emulsifier, fat substitute, gelling agent, etc. When the rate of lactate production by L. kefiranofaciens exceeds the rate of lactate consumption by the S. cerevisiae yeast contained in the kefir starter culture, the rate of kefiran synthesis decreases. The presence of lactate inhibits the growth of the lactic acid bacteria even if the pH of the medium is regulated by the addition of alkali; therefore, to increase the productivity of the BTCs, lactate should be removed from the reaction medium. To optimize the synthesis of kefiran, the conditions of joint cultivation of the kefiran-producing lactic acid bacterium Lactobacillus kefiranofaciens with the lactate-assimilating yeast Saccharomyces cerevisiae were controlled as follows (Table 2). When the pH decreased to 4.95 due to the accumulation of lactate produced by L. kefiranofaciens, the reaction medium was aerated to allow the S. cerevisiae yeast to consume a part of the lactate. As soon as the pH reached the value of 5.05, the reaction system was returned to anaerobic conditions by bubbling N2.
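The switching logic of this kefiran co-culture amounts to a simple hysteresis controller. A minimal Python sketch follows; only the 4.95/5.05 thresholds come from the description above, and the pH trajectory at the end is a toy illustration rather than measured data.

```python
# Sketch of the pH-hysteresis control described above for the
# L. kefiranofaciens / S. cerevisiae co-culture: aerate when lactate
# accumulation drives the pH down to 4.95, and return to N2 sparging
# once the yeast has consumed enough lactate to bring the pH back to 5.05.

PH_LOW, PH_HIGH = 4.95, 5.05

def control_step(ph: float, mode: str) -> str:
    """One hysteresis step: returns 'air' or 'N2' for the sparging gas."""
    if mode == "N2" and ph <= PH_LOW:
        return "air"   # let S. cerevisiae oxidatively consume lactate
    if mode == "air" and ph >= PH_HIGH:
        return "N2"    # restore anaerobiosis for kefiran synthesis
    return mode        # between the thresholds: stay in the current regime

# Toy pH trajectory showing the switching behaviour
mode = "N2"
for ph in [5.3, 5.1, 4.95, 4.93, 5.0, 5.05, 5.2]:
    mode = control_step(ph, mode)
    print(f"pH={ph:.2f} -> sparge {mode}")
```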
Fructo-oligosaccharides (FOS) function as prebiotics: they improve mineral absorption in the large intestine, enhance immunity, and reduce total cholesterol. Owing to the sequential combination of two catalysts, molasses could be successfully used as a substrate for the production of FOS [77] (Table 2). Incubation of sugarcane molasses with an invertase-free Saccharomyces cerevisiae prior to the addition of A. pullulans as the BTC eliminated the inhibition of the PS producer by glucose and increased FOS accumulation from 44% to 56% in 1 h. This study demonstrated that various approaches can be used for controlling the interactions in cell combinations in order to optimize the production of EPSs from various renewable resources and wastes. Features of BTCs Determine the Palette of Substrates (Types of Biomass and Methods of Its Pretreatment) for EPS Synthesis The choice of BTCs determines the variety of substrates, and their possible concentrations, for conversion to EPS. Most microbial cells used as BTCs are capable of converting simple sugars (mono- and disaccharides) into PS. These sugars can be obtained by hydrolysis of various renewable raw materials and biomass sources. Hydrolysis methods can differ, and various chemical agents and/or enzymes, including enzymatic complexes, can be used for these purposes [15,32,59,63,82,85]. Processes that convert some type of industrial or agricultural waste into a final biotechnological product through successive stages of chemical and biocatalytic transformation, none of which individually can give a similar product, have recently been termed "hybrid" processes [26,95,96]. The hydrolysates of raw materials containing the target sugars for conversion to EPS are complex media which can contain various natural inhibitors of the enzymes of PS-producing cells. Some products of the hydrolytic pretreatment of raw materials can also have an inhibiting effect on the PS producers. Therefore, it is more appropriate to use stabilized forms of BTCs in such hybrid processes. The complexity of the biochemical transformations in such processes imposes additional limitations on the choice of raw materials. Thus, the nature of the raw material itself, the source of the desired substrate, and its chemical composition undoubtedly affect the degree of conversion, the rate of the process, and the yield of the desired product (Table 1). For example, the maximum concentration of BC (4.5 g/L) and process productivity (0.75 g/L/day) were achieved by cultivating immobilized K. xylinum B-12429 in a medium where enzymatic hydrolysates of Jerusalem artichoke tubers were used as the substrate. These parameters were higher than in the case of pure glucose being used as a substrate for the same cells [32,97]. During the transformation of the sugars contained in micro- and macroalgae biomass hydrolysates under the action of the same biocatalyst, the maximum concentration of BC (2.6 g/L) was obtained using enzymatic hydrolysates of Chlorella biomass. At the same time, the yield of BC in this medium was significantly lower, since the hydrolysates of this raw material had a high viscosity and a low concentration of sugars suitable for conversion into BC [32]. The thickest and most durable BC film was formed when using immobilized BTCs in the medium with the Jerusalem artichoke tuber hydrolysate; the other characteristics of the obtained BC films were almost identical in these experiments. Hydrolysis of biomass to obtain carbohydrate substrates and the subsequent conversion of these substrates into PS are often performed in different reactors, so the two processes occur successively; a simple mass balance for such a two-stage scheme is sketched below.
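A minimal Python sketch of that two-stage mass balance follows; all of the numbers here are illustrative assumptions, not values from the cited studies.

```python
# Minimal mass-balance sketch for a two-stage "hybrid" process:
# raw biomass -> (hydrolysis) -> fermentable sugars -> (biocatalysis) -> EPS.

def overall_conversion(hydrolysis_yield: float, eps_yield: float) -> float:
    """g EPS per g raw biomass = (g sugar / g biomass) * (g EPS / g sugar)."""
    return hydrolysis_yield * eps_yield

biomass_g_per_l = 20.0    # hypothetical raw-material loading
hydrolysis_yield = 0.60   # assumed g sugar per g biomass
eps_yield = 0.25          # assumed g EPS per g sugar

eps_titre = biomass_g_per_l * overall_conversion(hydrolysis_yield, eps_yield)
print(f"expected EPS titre: {eps_titre:.1f} g/L")  # 3.0 g/L under these assumptions
```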
It is possible, however, to shorten the total duration of the EPS production process by using a single reactor instead of two, so that both stages run simultaneously. A similar approach used for obtaining ethanol or organic acids from renewable raw materials is usually called "simultaneous saccharification and fermentation" [25]. For such an organization of the EPS production process, it is important to find a compromise between the optimal conditions for the coexistence of several biocatalysts (hydrolytic enzymes and producer cells) that allows the maximum PS yield to be achieved (Figure 1). The same process option can also be implemented if the polysaccharide producer itself is able to synthesize hydrolytic enzymes, as is the case for the pullulan producer Aureobasidium pullulans [98,99]. However, even in this case, the accumulation of the maximum concentration of active hydrolases and that of the final product can be widely separated in time, which means that until the enzymes are synthesized by the cells, no significant accumulation of the target product will occur. In addition, part of the substrate in the medium will be consumed by the producing cells for the synthesis of the necessary enzymes. This means that the percentage of conversion of the initial substrate into the target product (EPS) and the level of its accumulation will be reduced compared to the usual two-stage "hybrid" process. Note that PS production can also involve BTCs which yield completely different products, the byproducts of whose functioning can be used as substrates for PS-producing cells. Yeast cells are an example of such BTCs: the media remaining after their cultivation, in the form of distillery stillage (waste from the production of ethanol via fermentation and subsequent distillation), are often used as a basis of nutrient media for PS-producing BTCs, because stillage is rich in carbohydrates and organic nitrogen [44,58]. Other types of waste from biotechnological industries can also provide nutrient media for PS accumulation. Thus, in the case of biocatalytic wastewater treatment with phototrophic microbial cells involving the accumulation of suspended biomass, which is subsequently used for the production of biodiesel, the wastes are glycerol and the biomass residues of microalgae and cyanobacteria; both of these can be used to obtain polysaccharides [32,60]. When developing BTCs for such combined biotechnological solutions, it is necessary to ensure the biological compatibility of the processes involved: the waste produced after the first stage should contain the lowest possible concentration of substances that are toxic for the BTCs used at the next stage. At the same time, the second-stage BTCs based on PS producers should have the appropriate substrate specificity and increased stability during operation. The schemes in Figure 1 summarize the known approaches to developing various BTCs and also serve as a kind of guide for creating new BTCs and developing "hybrid" processes with their participation. The "hybrid" processes can extend the range of possible raw materials to include non-traditional ones, help reduce the number of bioconversion stages and the total duration of the process, and increase the conversion rate. At the same time, the main prospects for the development of BTCs, in our opinion, lie primarily in the field of synthetic biology based on the use of nature-like solutions [77,94].
Systems of this type are strongly influenced by such phenomena as QS and the interactions between different cultures of microorganisms, including those realized via signaling molecules. At the same time, in developing such BTCs, it is important to take into account the possibility of the hydrolysis of these molecules under the action of the hydrolytic enzymes that may be used when the EPS production processes are combined into one stage. Factors Affecting the Properties of BTCs Used for PS Synthesis The nature of the substrate significantly affects the characteristics of the PS obtained using BTCs. In general, the factors influencing the synthesis of PS encompass the characteristics of the reaction medium itself, including the type of carbon source used and the accumulated metabolic products of the cells comprising the BTCs, as well as the conditions of PS biosynthesis (temperature, pH, aeration of the medium, mixing rate, etc.). Changes in the components of the medium affect the growth of the new generations of cells introduced into the process as BTCs, their metabolic activity, and the orientation of their metabolism (the synthesis, and the level of synthesis, of the enzymes of a certain biochemical pathway), directly or indirectly predetermining the formation of the target biopolymer. The release of EPSs is usually most noticeable when the process with BTCs is carried out in media with a high concentration of carbon and a low (limiting) concentration of the nitrogen source. For example, BC samples obtained using Gluconacetobacter intermedius cells as BTCs with glycerol or xylose as the main carbon sources had a reduced degree of polymerization compared to those obtained in a medium with glucose [100]. This was due to the lower activity of the enzymes involved in the synthesis of BC when xylose and glycerol were used as substrates. BC synthesized by A. xylinum AJ3 cells had the highest porosity (80%) in the case of sucrose as a substrate and lower porosities in the case of other substrates: glucose (70%), fructose (66%), and glycerol (65%) [101]. The introduction of additives in the form of lignosulfonates (1 wt.%) into a reaction medium with G. xylinus cells as BTCs led to the synthesis of BC samples with an increased degree of crystallinity [102]. Among the various carbon sources, glucose, fructose, and sucrose are most often used to produce EPS (Table 1). However, metabolites (such as gluconic, acetic, and lactic acids) that reduce the pH of the reaction medium often inhibit PS formation when glucose is used as the only carbon source. A higher yield of BC was observed when using a mixture of various sugars, for example, sucrose and fructose, whereas the presence of glucose in combination with other sugars led to a decrease in the yield of EPS [102]. Polyhydric alcohols, such as arabitol and mannitol, can also be used as the sole carbon source for BC synthesis, and an increase in synthesis efficiency compared to the case of glucose has been shown for these substrates [103]. Glycerol was also successfully tested as a substrate (20 g/L) for the production of BC: during the synthesis of the biopolymer without mixing in a medium with the mentioned glycerol concentration, the yield of BC exceeded the level achieved with 50 g/L of glucose [104]. When cultivating Komagataeibacter sp. PAP1 cells in a medium containing soy whey hydrolysate, the produced BC samples had 108% greater tensile strength and 841% greater Young's modulus than those synthesized in a standard glucose-based medium [105].
BC samples produced by culturing G. xylinus cells in a medium containing corn stalk hydrolysate had a higher degree of crystallinity compared to those obtained in media with pure glucose [106]. Note that different microorganisms used as BTCs react differently to changes in the composition of the medium [107]; therefore, the choice of BTCs essentially determines how the composition of the medium influences the structure and properties of the synthesized BC. Cultivation of cells synthesizing PS can be carried out in reactors that provide static conditions for the process or in those that allow mixing (Table 1). Static cultivation is a relatively simple and widely used method of obtaining EPS. As a rule, samples of biopolymers with a more regular structure and increased crystallinity are formed in this case [108,109]. Reactors with active mixing tend to produce biopolymers with a wider molecular weight distribution, reduced crystallinity and, accordingly, decreased mechanical strength compared to those obtained under static conditions; this was observed, for example, in PS production with BTCs [108,109]. The introduction of various additives into the reaction medium, as well as changes in the process conditions (temperature, concentration of oxygen, etc.), can have a great impact on the properties of the polymers [110]. It is known, for example, that the synthesis of BC is predetermined by the appearance in the medium of the signaling substance c-di-GMP. However, its artificial introduction into the medium leads to the activation of phosphodiesterases in the cells of the BTCs, which catalyze the degradation of c-di-GMP; therefore, only the appearance of this substance as a result of its synthesis by the cells themselves is beneficial for PS production [111]. Thus, in order to increase the synthesis of BC, it is necessary to stimulate the production of this substance by the cells. c-di-GMP is a QS factor, that is, a substance that is synthesized by cells and is necessary for their transition to a quorum state. QS is a feature of highly concentrated populations, which can be stabilized by the expression of "silent genes" and the synthesis of EPS with a simultaneous decrease in the rate of active cell growth. Therefore, to stimulate the PS-producing cells to transition to QS, that is, to this programmed genetic response, it is necessary to increase their concentration per unit volume and thereby their synthesis of c-di-GMP molecules [32,112]. Thus, individual additives in the medium may not have the desired effect on the synthesis of PS. The concentration of oxygen in the reaction medium is an important factor affecting the synthesis of EPS. An excess of dissolved oxygen in the medium can lead to cell proliferation and growth, which can be counterproductive, because part of the substrate is then spent on growth rather than on PS synthesis. The growth can, in turn, be accompanied by the accumulation of metabolites in the form of organic acids and ultimately lead to a decrease in the pH of the medium, which can cause a further decrease in the level of BC synthesis. Conversely, a too-low oxygen concentration in the medium can lead to a significant decrease in the rate of cell metabolism, including a decrease in the intensity of oxidative phosphorylation and ATP formation. Thus, the effect of aeration on BC synthesis by Gluconacetobacter xylinus DSM46604 cells was studied by varying this parameter in the range of 1.5-7.5 L/min at a constant stirring rate of 150 rpm.
The efficiency of BC synthesis reached a maximum at 5 L/min and then decreased by 10% with a further increase in the aeration rate [104]. Studies of the optimal conditions for obtaining xanthan have shown that the temperature and pH at which the BTCs are applied have a notable influence on the yield of the final product, primarily because these parameters govern the activity of the enzymes involved in the biosynthesis. The optimal conditions for the growth of the cells producing this EPS and for the synthesis of xanthan are a temperature in the range of 28-30 °C and a pH of 7-8 [113]. At temperatures beyond the optimal range, the molecular weight of xanthan decreases, and pyruvate and acetyl residues appear in the biopolymer, which reduces the viscosity of the target solution. During the biosynthetic process, the pH of the medium decreases to values below 5 due to the formation of organic acids, which leads to a sharp decrease in the accumulation of xanthan. However, a study of the effect of pH showed that controlling this parameter affects only the growth of the bacterial cells and has no impact on the accumulation of PSs [114]. To obtain xanthan, it is necessary to have trace amounts of potassium, calcium, iron, and magnesium salts in the medium. A small amount of organic acids, such as citric and succinic acid, introduced into the reactor increases the synthesis of PSs; however, the presence of fumaric acid residues leads to the formation of xanthan with a highly branched structure [114]. Note that among the many factors that can affect the functioning of BTCs and the process of obtaining a specific PS, there is often one that makes the greatest contribution. In the case of curdlan synthesis, it is the aeration of the reaction medium. Aeration and mixing of the medium have a significant effect on the synthesis of curdlan, since the synthesized polymer surrounding the cells prevents the transfer of oxygen from the medium to the cells. It was also found that the productivity of curdlan production decreases with an increase in the volume of the nutrient medium in the reactor. Controlling the mixing rate and the filling factor of the reactor for curdlan accumulation allows the concentration of dissolved oxygen in the medium to be maintained in the range of 45-60%, which is optimal for the synthesis of this EPS [115]; a sketch of such a control loop is given below. Sometimes there are several key conditions at once that must be fulfilled to optimize PS synthesis. Thus, for most BTCs synthesizing succinoglycan, a temperature of 28-30 °C and a neutral pH are the optimal conditions; a decrease in the pH of the reaction medium leads to a decrease in the synthesis of the EPS [116,117]. Aeration of the medium is also necessary for the synthesis of succinoglycan, since the accumulated polymer greatly changes the viscosity of the medium, limiting oxygen access to the cells. It has been shown that the maximum concentration of this PS is achieved during the cultivation of Agrobacterium radiobacter 1654 cells in a medium with sucrose at a mixing speed of 250 rpm and with molasses at a mixing speed of 300 rpm [116]. So, a variety of factors should be taken into account for such PS production processes.
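A minimal Python sketch of the dissolved-oxygen band control mentioned for curdlan follows. Only the 45-60% band comes from the cited optimum; the agitation limits and step size are illustrative assumptions, and the DO readings at the end are a toy trajectory rather than measured data.

```python
# Sketch of dissolved-oxygen (DO) band control for curdlan synthesis:
# nudge the stirring rate to hold DO within 45-60% of saturation.

DO_MIN, DO_MAX = 45.0, 60.0        # % of saturation, per the cited optimum
RPM_MIN, RPM_MAX, STEP = 150, 400, 10  # illustrative agitation limits

def adjust_rpm(do_percent: float, rpm: int) -> int:
    """Raise agitation when DO falls below the band, lower it when above."""
    if do_percent < DO_MIN:
        return min(rpm + STEP, RPM_MAX)
    if do_percent > DO_MAX:
        return max(rpm - STEP, RPM_MIN)
    return rpm  # inside the optimal band: leave agitation unchanged

rpm = 250
for do in [62.0, 58.0, 44.0, 40.0, 50.0]:  # toy DO readings
    rpm = adjust_rpm(do, rpm)
    print(f"DO={do:.0f}% -> {rpm} rpm")
```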
Another factor that can affect the functioning of PS producers, in the case of immobilized cells, is the carrier used (Table 1). In particular, a screening and selection of carriers for the immobilization of Aureobasidium pullulans cells (synthetic materials, agro-industrial wastes, and mineral materials) was performed. Successful immobilization of the cells was shown on all the studied carriers, and their ability to produce fructo-oligosaccharides (FOS) was investigated [78]. Polyurethane foam was one of the most effective synthetic carriers, immobilizing up to 75% of the injected cells used for obtaining the BTCs; this made it possible to increase the output of FOS by 12% (w/w) compared to free cells. Although the walnut shell had a much lower immobilization efficiency (less than 20% by weight), the highest FOS productivity was unexpectedly found for this material, with an increase in yield by 25% (by weight). Thus, according to the results obtained, the use of walnut shells, a recyclable and inexpensive carrier which is actually a waste product of processing natural raw materials, for the immobilization of A. pullulans for the production of FOS turned out to be an interesting alternative to synthetic porous carriers. Preparation of Nanocomposites in the Process of Biocatalytic PS Synthesis A separate direction in the application of BTCs for PS synthesis is the production of nanocomposites (NCs) with various target properties. Particular attention in this research is paid to such characteristics of the final product as the modulus of elasticity, elongation, and tensile strength [118]. Most often, such NCs are produced after the PSs have already been synthesized using BTCs. For example, to obtain composite fibrous materials with antibacterial properties, BC samples were modified with poly(ε-caprolactone) or poly(hydroxybutyrate) and then functionalized with an enzyme-polyelectrolyte complex of quorum-quenching enzymes, such as hexahistidine-tagged organophosphorus hydrolase stabilized by poly(glutamic acid), or with a suspension of tantalum nanoparticles [119]. It was also proposed to obtain a new multifunctional three-dimensional material for bone tissue engineering by co-electrospinning a combination of poly(hydroxybutyrate-co-hydroxyvalerate) and poly(ε-caprolactone) fibers with an antibiotic and pullulan with diatomite inclusions [120]. It was found that the chaotically oriented, spider-web-like shape of BC nanofibers usually obtained prevents the realization of their full potential for some applications, because molecules of such shapes are difficult to modify to produce nanocomposites. This increases the interest in processes of obtaining nanocomposite materials directly within the synthesis of PS under the action of BTCs (Table 3). Producing NCs in the single-stage synthesis of PS is possible by introducing various additives directly into the reaction medium during the functioning of the BTCs. For example, a nanostructured Fe-containing polysaccharide hydrogel can be obtained by using an Fe-reducing strain of K. oxytoca as the BTC and ferric citrate as the main substrate. Such NCs are considered promising for use in medicine in tissue engineering, diagnostic magnetic resonance imaging, DNA detection, intracellular tags, etc. [121]. [Table 3 also lists applications of such composites, e.g. matrices for obtaining "synthetic synbiotic" consortia for in vitro and in vivo testing, effective cell delivery systems to the host body, bases for the creation of synbiotic dairy products with a preventive effect, and matrices for immobilized probiotic cultures of microorganisms; parameters marked with an asterisk were estimated by the authors of the review from the data in the corresponding publications or taken from the references.] A simple method for producing BC-based composites involves adding graphene oxide additives to a medium with BTCs [122,123,130].
Graphene oxide was bound to BC via hydrogen bonds in the resulting NCs when a natural BC-producing consortium, isolated during the acetic acid fermentation of Golden Delicious apples, was used as the BTC in the presence of graphene oxide. The obtained BC/GO composites look attractive for use as dressing materials with prolonged drug release: BC/GO composite membranes were characterized by sorption of paracetamol (60 mg/g BC) with a long desorption time of the pharmaceutical substance, 80% of the paracetamol being released within 24 h [122]. Other NCs based on BC and graphene oxide were characterized by a low rate of ibuprofen release [130]. In media with other BTCs (e.g., Komagataeibacter xylinus ATCC 11142), the presence of reduced graphene oxide (RGO) at various concentrations significantly changed the characteristics of BC synthesis, and higher yields of graphene oxide NCs with BC were achieved. Strongly integrated graphene nanoparticles between BC nanofibers were obtained by introducing RGO at a concentration of 2 wt.% into the reaction medium of the BTCs. Note that a gradual decrease in the content of RGO was observed during the synthesis of BC, with the formation of thick BC/RGO hydrogels at the air-water interface. As expected, the hydroxyl functional groups present in BC (which are formed during in situ biosynthesis) created conditions that had a significant impact on the incorporation of RGO into the NCs. This resulted in an improved distribution of RGO nanoparticles between the BC nanofibers, forming a percolation mesh structure, which led to an improvement in the electrical conductivity and mechanical properties of the resulting nanocomposite. The BC/RGO films formed after in situ biosynthesis were light, very flexible, and had a high tensile strength [123]. The strategy of including graphene into NCs within the PS synthesis process led to the production of NCs with properties comparable to those of BC chemically modified ex situ [131,132]. Functional nanomaterials for bone tissue regeneration were proposed to be obtained during the one-stage synthesis of BC by introducing carbon nanotubes (CNTs) into a reaction medium with G. xylinus KCCM 40216 cells used as BTCs. The pretreatment of the CNTs with an amphiphilic comb-like polymer, synthesized by free radical polymerization of methyl methacrylate, poly(ethylene glycol) methacrylate, or poly(ethylene glycol) methyl ether methacrylate, not only facilitated the dispersion of the CNTs but also induced the hybridization of the CNTs and BC [124]. A simple method for the orientation of nanofibers was developed based on the effect of an electric field (10 mA) on such BTCs as Gluconacetobacter xylinus CGMCC 2955 during PS synthesis, when 20 wt.% of oriented glass fiber was injected into the reaction medium. The resulting "BC-glass fiber" nanocomposite demonstrated a significant increase in tensile strength and thermal stability [125]. It was found that in the single-stage production of nanocomposites, additives introduced into the medium with the BTCs can interact with the BTC cells and influence their productivity, and can also directly bind to the PS fibrils during their synthesis. Thus, additives can affect the yield, structure, morphology, and physical properties of the final nanocomposite, or can impart new properties to the PSs, such as optical, antibacterial, or catalytic activity. Levan, for instance, can bind to BC microfibrils, form hydrogen bonds, and thereby affect the size of the BC microfibrils and the properties of the final NCs.
Linear PSs, such as glucomannan and xylan, can also be incorporated into the BC matrix. The addition of other PSs, such as pectin, pullulan, agar, xanthan, etc., can help improve the production activity of BTCs by increasing the viscosity of the reaction medium. PSs which increase the viscosity of the medium can protect the bacterial cells from shear forces, thus preventing the formation of large clumps of BC, and help to enhance BC production [127]. Additives of carboxymethyl cellulose can increase BC production by a factor of 1.7. The introduction of pullulan into the medium can have a double impact: the increase in the viscosity of the reaction medium can have a positive effect on the rate of BC synthesis by the BTCs, and the increase in the interactions between the BC fibrils can lead to an improvement in the mechanical properties of the resulting NCs [127]. Indeed, BC production was increased from 0.447 g/L to 0.814 g/L and 1.997 g/L in the presence of pure pullulan at concentrations of 1.5 and 2.0%, respectively [127]. Another interesting approach to obtaining modified PSs directly in the process of their biosynthesis is the joint cultivation of several different PS-producing BTCs [134,135]. Composites based on PSs and chemically synthesized polymers (having no direct natural analogues) are usually called "ecocomposites" or "biocomposites", whereas those based entirely on natural fibers and biopolymers are called either "green composites" or "green biocomposites" [136]. Note that the creation of synthetic consortia and the introduction of several BTCs into a joint biocatalytic process, known as consolidated bioprocessing, is an interesting direction in the development of synthetic biology. This approach is widely used in industrial biotechnology, including the production of enzymes, food additives, antimicrobials, microbial fuel cells, etc. [135,137]. Thus, experiments were carried out on milk fermentation with the addition of the probiotic strains Lactiplantibacillus plantarum MWLp-12 and Limosilactobacillus fermentum MWLf-4 to a medium with the EPS-producing bacteria Lactobacillus delbrueckii subsp. bulgaricus 6047 and Streptococcus thermophilus 6038. These tests led to the production of fermented milk products with an increased content of EPSs and better consistency, water retention capacity, and acidity compared to using only the existing commercial fermentation bacteria [12]. A study of the effect of co-cultivation on the production of EPSs by three strains of Lactobacillus rhamnosus (ATCC 9595, R0011, and RW-9595M) in combination with Saccharomyces cerevisiae cells showed that the yield of EPSs in 48 h increased by 39%, 49%, and 42%, respectively, when these microorganisms were co-cultured, compared with the same strains used individually [138]. Note also that, although lactic acid bacteria and yeasts are found in various fermented products, the molecular mechanisms associated with the microbial interactions and their effect on EPS biosynthesis are still insufficiently studied. In general, especially notable progress in obtaining NCs in the synthesis of various PSs has been achieved specifically for BC [135].
The introduction of producers of other PSs (alginate, pullulan, etc.) into media with BTCs catalyzing the synthesis of BC, and their joint use for the single-stage production of "green biocomposites", reduces the cost of production compared to processes organized in several stages. This approach also allows materials with new or modified properties to be obtained [134]. In addition, the intercellular interactions of the various cells used as BTCs in the process of joint PS synthesis can have unexpected beneficial effects, for example, an increase in the synthesis rate of a particular PS. The method of co-cultivating K. xylinus with other bacteria "in one pot" has been successfully applied to the synthesis of BC/hyaluronic acid [139] and BC/polyhydroxybutyrate [140] composites. To obtain BC-based composites with improved mechanical properties, the joint cultivation of two BTCs based on Gluconacetobacter hansenii ATCC 23769 and Escherichia coli ATCC 700728 cells under static conditions was proposed. It was found that, with these BTCs, mannose-rich EPSs synthesized by the E. coli cells were included into the BC matrix without a significant change in the level of BC crystallinity. For the BC films obtained by co-cultivation of the two cultures, Young's modulus and tensile stress increased by 81.9% and 79.3%, respectively, compared to those of the BC samples obtained in the absence of the EPSs. When the two BTCs were used together, the synthesis rate and the output of both BC and EPSs increased. The maximum concentration of EPSs during the co-cultivation of E. coli and G. hansenii was observed after 24 h and amounted to 11.6 mg/L, whereas for the E. coli monoculture it was more than twice lower (5.1 mg/L) [128]. Thus, there are studies in which EPSs are injected into the medium directly during BC synthesis by certain BTCs to obtain BC-based NCs, and similar NCs are obtained when BTCs producing similar EPSs are loaded directly into the medium for the simultaneous biosynthesis of BC. It was, of course, interesting to compare the characteristics of similar PSs produced with these two approaches. Such comparative analyses of BC-based NCs obtained with the two strategies were in fact performed: (1) co-fermentation of Komagataeibacter hansenii, a BC producer, with Aureobasidium pullulans, producing pullulan; and (2) synthesis of BC using a monoculture of Komagataeibacter hansenii with the addition of previously synthesized pullulan to the reaction medium [118,127]. It was found that in both cases the obtained NCs had an increased Young's modulus compared to BC without pullulan. However, when the two BTCs were cultured together, the degree of glucose consumption over 168 h was lower than in the process with the monoculture and the addition of ready-made pullulan. It was suggested that, in co-cultivation, the growth of A. pullulans disrupted the growth of K. hansenii, since less glucose was consumed and less BC was produced. After 7 days of joint cultivation of the two BTCs, a significant amount of glucose (~30 g/L) remained in the broth. Thus, for joint cultivation, it is important to ensure compatible interactions between the microorganisms combined in the co-cultivation process. To achieve this, it is necessary to carefully choose the BTCs among the possible candidates, to analyze the effect of the media composition, and to select substrates that reduce the level of competition between the cells of the combined BTCs; some of the quantitative gains reported above are illustrated below.
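A small Python sketch quantifying two of the gains quoted above (the co-culture EPS titres and the pullulan-supplemented BC yields); the input values are taken directly from the figures cited in the text.

```python
# Relative gains of co-cultivation and additive strategies, computed
# from the reported values quoted in the surrounding text.

def fold(new: float, old: float) -> float:
    return new / old

def pct_gain(new: float, old: float) -> float:
    return 100.0 * (new - old) / old

eps_cocult, eps_mono = 11.6, 5.1          # mg/L EPS after 24 h (co-culture vs E. coli alone)
print(f"EPS: {fold(eps_cocult, eps_mono):.1f}x the monoculture level")  # ~2.3x

bc_base, bc_pullulan_2pct = 0.447, 1.997  # g/L BC without / with 2.0% pullulan
print(f"BC with 2.0% pullulan: +{pct_gain(bc_pullulan_2pct, bc_base):.0f}%")  # ~+347%
```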
The importance of such compatibility was confirmed by an experiment showing that, during the joint cultivation of BTCs, the rate of PS synthesis and the mechanical properties of the NCs can vary depending on the composition of the medium, as well as on the type of reactor used [118]. It was noted that various wastes, including molasses (a sugar production waste), can be successfully used as a cost-effective substrate for the one-stage synthesis of NCs, whereas substrates based on food sugars gave much worse results [129]. Conclusions The reviewed studies demonstrate the key role that BTCs play in the synthesis of EPSs. On the one hand, BTCs impose certain restrictions on the implementation of these processes in terms of the media composition, combinations with other BTCs, process conditions, etc. On the other hand, developing novel BTCs, and especially combining BTCs in hybrid processes, has huge potential for involving new classes of waste in the production of EPSs suitable for a variety of practical applications. EPSs with enhanced properties, including nanocomposites, can be obtained either by adding new functional components to the existing processes of PS production, or by combining several BTCs in a single process supplying the initial components for the target composites in real time. The use of such BTCs ensures the production of new composite materials based on EPSs directly during their synthesis. Such creation of materials with fundamentally different characteristics opens new areas of application. Understanding the principles by which "living" biocatalysts function in such processes makes it possible to purposefully control biocatalysis, improve the characteristics of the processes themselves, and obtain new materials with a variety of functions. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2022-11-10T17:01:58.207Z
2022-11-07T00:00:00.000
{ "year": 2022, "sha1": "a4a38f017a0985b405c279ba5118b24cd61af846", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4344/12/11/1377/pdf?version=1667812010", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5dd6d873a35f6267b97cd955ef56a810d844b4f7", "s2fieldsofstudy": [ "Chemistry", "Biology", "Materials Science", "Environmental Science" ], "extfieldsofstudy": [] }
8700788
pes2o/s2orc
v3-fos-license
Overexpression of YWHAZ relates to tumor cell proliferation and malignant outcome of gastric carcinoma Background: Several studies have demonstrated that YWHAZ (14-3-3ζ), a member of the 14-3-3 family of proteins, is implicated in the initiation and progression of cancers. We tested whether YWHAZ acts as a cancer-promoting gene through its activation/overexpression in gastric cancer (GC). Methods: We analysed 7 GC cell lines and 141 primary tumours, which were curatively resected in our hospital between 2001 and 2003. Results: Overexpression of the YWHAZ protein was frequently detected in GC cell lines (six out of seven lines, 85.7%) and primary tumour samples of GC (72 out of 141 cases, 51%), and significantly correlated with larger tumour size, venous and lymphatic invasion, deeper tumour depth, and higher pathological stage and recurrence rate. Patients with YWHAZ-overexpressing tumours had worse overall survival rates than those with non-expressing tumours, in a manner dependent on both the intensity and the proportion of expression. YWHAZ positivity was independently associated with a worse outcome in multivariate analysis (P=0.0491, hazard ratio 2.3 (1.003-5.304)). Knockdown of YWHAZ expression using several specific siRNAs inhibited the proliferation, migration, and invasion of YWHAZ-overexpressing GC cells. Higher expression of the YWHAZ protein was significantly associated with lower expression of miR-375 in primary GC tissues (P=0.0047). Conclusion: These findings suggest that YWHAZ has a pivotal role in tumour cell proliferation through its overexpression, and highlight its usefulness as a prognostic factor and potential therapeutic target in GC. Gastric cancer is the second leading cause of cancer-related deaths in the world (Parkin et al, 2005). Recent advances in diagnostic techniques and peri-operative management have increased early detection of gastric cancer and decreased the mortality rate. However, patients with advanced disease still frequently develop recurrent disease after extended radical resections, and consequently have extremely poor survival rates (Martin et al, 2002). Many genes have been analysed in an attempt to understand the molecular mechanisms of human gastric cancers and improve their clinical outcomes; however, only a few with frequent alterations have been identified (Ushijima and Sasako, 2004). Oncogenic activation of β-catenin and K-ras (Park et al, 1999; Lee et al, 2002), amplifications of MET and ERBB2, mutations of TP53, APC, and E-cadherin (Becker et al, 1994; Maesawa et al, 1995), inactivation of the mismatch repair gene hMLH1 associated with microsatellite instability (MSI) (Fang et al, 2003), and hypermethylation of p16 have been repeatedly reported (Oue et al, 2002; Ding et al, 2003). As shown in these reports, studies have attempted to identify the biological factors involved in the malignant potential of gastric cancer. However, in clinical settings, only a few genes have been assayed as therapeutic targets and/or diagnostic biomarkers (Bang et al, 2010), suggesting that novel genes associated with the progression of gastric cancer need to be identified. Recently, YWHAZ has been identified as a clinically relevant prognostic marker for breast cancer, lung cancer, and head and neck cancer (Fan et al, 2007; Matta et al, 2008; Lu et al, 2009; Neal et al, 2009), and could allow the identification of patients with a potentially poor prognosis who should receive more aggressive treatment.
Moreover, it has been reported that YWHAZ can be post-transcriptionally regulated by the microRNA miR-375 in vitro, leading to reductions in cell growth in gastric cancer cell lines (Tsukamoto et al, 2010). However, to date, there has been no report on the clinical and prognostic significance of YWHAZ in patients with primary gastric cancer. These findings prompted us to determine the clinicopathological and prognostic significance of YWHAZ overexpression/activation in primary gastric cancer. Consequently, we demonstrated that YWHAZ was frequently overexpressed in gastric cancer lines and primary gastric cancers. The overexpression of YWHAZ was a poor prognosticator independent of other prognostic factors. Moreover, downregulation of YWHAZ expression was demonstrated to suppress cell proliferation, migration, and invasion in gastric cancer cell lines, and the overexpression of YWHAZ may be significantly associated with the lower expression of miR-375 seen in primary samples. Our results provided evidence that YWHAZ may be an important molecular marker for determining malignant properties and a target for molecular therapy in patients with gastric cancer.

MATERIALS AND METHODS

Gastric cancer cell lines and primary tissue samples. A total of seven gastric cancer cell lines (KatoIII, NUGC4, HGC27, MKN7, MKN28, MKN45, and MKN74) were used. HGC27 cells were cultured in Dulbecco's Minimum Essential Medium (DMEM):F12 medium and the others were cultured in Roswell Park Memorial Institute (RPMI)-1640 medium (Sigma, St. Louis, MO, USA). All media were purchased from Sigma, and supplemented with 100 ml l⁻¹ FBS (Trace Scientific, Melbourne, Australia). All cell lines were cultured in 50 ml l⁻¹ carbon dioxide at 37 °C in a humidified chamber. Primary tumour samples of gastric cancer had been obtained from 109 consecutive gastric cancer patients who underwent curative gastrectomy (R0) at the Division of Digestive Surgery, Department of Surgery, Kyoto Prefectural University of Medicine (Kyoto, Japan) between 2001 and 2003, and were embedded in paraffin after 24 h of formalin fixation. Relevant clinical and survival data were available for all patients. Written consent was always obtained in a formal style after approval by the local Ethics Committee. None of these patients underwent endoscopic mucosal resection, palliative resection, preoperative chemotherapy, or radiotherapy, and none of them had synchronous or metachronous multiple cancers in other organs. Disease stage was defined in accordance with the International Union Against Cancer tumour-lymph node-metastasis (TNM) classification (7th edition; Sobin and Wittekind, 2009). The median follow-up period for surviving patients was 54.6 months (range 0.5-66.7 months).

Western blotting. Anti-YWHAZ rabbit polyclonal antibody and anti-GAPDH antibody were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). The YWHAZ antibody is an affinity-purified rabbit polyclonal antibody raised against a peptide mapping within a relatively divergent domain of YWHAZ. Cells were lysed and their proteins were extracted by M-PER Mammalian Protein Extraction Reagent (Thermo Scientific, Rockford, IL, USA). We used the MKN74 cell line as a positive control as its strong protein expression has been reported previously (Tsukamoto et al, 2010).

Immunohistochemistry.
Tumour samples were fixed with 10% formaldehyde in PBS, embedded in paraffin, sectioned into 5-µm thick slices, and subjected to immunohistochemical staining of the YWHAZ protein with the avidin-biotin-peroxidase method as described by Naoi et al (2008). In brief, after deparaffinization, endogenous peroxidases were quenched by incubating the sections for 20 min in 3% H₂O₂. Antigen retrieval was performed by heating the samples in 10 mmol l⁻¹ citrate buffer (pH 6.0) at 95 °C for 60 min. After treatment with Block Ace (Dainippon Sumitomo Pharmaceutical, Osaka, Japan) for 30 min at room temperature, sections were incubated at 4 °C overnight with an anti-YWHAZ (1:500) antibody. The avidin-biotin-peroxidase complex system (Vectastain Elite ABC universal kit; Vector Laboratories Inc., Burlingame, CA, USA) was used for colour development with diaminobenzidine tetrahydrochloride. Slides were counterstained with Mayer's hematoxylin. A formalin-fixed gastric cancer cell line overexpressing YWHAZ (MKN28), in which >50% of cells showed staining of each protein, was used as a positive control, whereas a formalin-fixed gastric cancer cell line with low expression of YWHAZ (HGC27) and MKN28 staining without the YWHAZ antibody were included as negative controls. For scoring YWHAZ expression, the intensity (intensity score: 0 = negative, 1 = weak, 2 = moderate, and 3 = strong) and the percentage of the total cell population expressing YWHAZ (proportion score: 0 = <10%, 1 = 10-33%, 2 = 34-66%, 3 = 67-100%) were evaluated for each case. In primary cases, YWHAZ protein expression was negative in most of the non-tumorous gastric mucosa and stroma. In all cases analysed, cases showing expression as negative as the non-tumorous gastric mucosa and stroma were regarded as intensity score 0, whereas cases showing the strongest YWHAZ protein expression were regarded as intensity score 3. The remaining cases were divided into intensity scores 1 and 2, respectively, according to the level of YWHAZ protein expression. The expression of YWHAZ was regarded as high expression (intensity plus proportion scores ≥4 of tumour cells showing immunopositivity) or low expression (intensity plus proportion scores ≤3 of tumour cells showing immunopositivity) using high-powered (×200) microscopy (Tsuda, 2008). The expression of YWHAZ was evaluated considering the status of both cytoplasmic and nuclear expression. If YWHAZ protein expression was recognised in both the cytoplasm and nucleus, the higher protein expression status of the cytoplasm and nucleus was employed. Moreover, if YWHAZ protein expression was recognised in either the cytoplasm or the nucleus, the level of its protein expression was employed.

Loss-of-function by small interfering RNA (siRNA) and cell growth analysis. For knocking down endogenous YWHAZ expression, each of the small interfering RNAs (siRNAs) targeting YWHAZ (Stealth RNAi siRNA no. HSS111442, no. HSS111443, and no. HSS111444; Invitrogen, Carlsbad, CA, USA) or a negative control (Negative Universal Control Med, Invitrogen) was transfected into cells (10 nmol l⁻¹) using Lipofectamine RNAiMAX (Invitrogen) according to the manufacturer's instructions. Knockdown of the YWHAZ gene was confirmed by western blotting. For measurements of cell growth, the number of viable cells at various time points after transfection was assessed by the colorimetric water-soluble tetrazolium salt (WST) assay (Cell Counting Kit-8; Dojindo Laboratories, Kumamoto, Japan) (Komatsu et al, 2009).
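To make the scoring rule described above concrete, the following minimal Python sketch classifies a tumour as high or low YWHAZ expression from an intensity score and a percentage of positive cells; the function names and example values are illustrative, not part of the original analysis.

```python
def proportion_score(percent_positive: float) -> int:
    """Map the percentage of YWHAZ-positive tumour cells to a 0-3 proportion score."""
    if percent_positive < 10:
        return 0
    elif percent_positive <= 33:
        return 1
    elif percent_positive <= 66:
        return 2
    return 3

def expression_group(intensity: int, percent_positive: float) -> str:
    """Combined score >= 4 is 'high' expression; <= 3 is 'low' (see text)."""
    total = intensity + proportion_score(percent_positive)
    return "high" if total >= 4 else "low"

# Example: moderate staining (intensity 2) in 70% of tumour cells -> 2 + 3 = 5 -> high
print(expression_group(2, 70))  # -> "high"
```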
Transwell migration and invasion assays. Transwell migration and invasion assays were carried out in 24-well modified Boyden chambers (transwell-chamber, BD Transduction, Franklin Lakes, NJ, USA). The upper surface of 6.4-mm diameter filters with 8-µm pores was precoated with (invasion assay) or without (migration assay) Matrigel (BD Transduction). siRNA transfectants (2 × 10⁴ cells per well) were transferred into the upper chamber. Following 48 h of incubation, migrated or invasive cells on the lower surface of the filters were fixed and stained with the Diff-Quik stain (Sysmex, Kobe, Japan), and stained cell nuclei were counted directly in triplicate. We assessed the invasive potential by calculating the number of cells, which is the ratio of the percentage invasion through the Matrigel-coated filters relative to migration through the uncoated filters of test cells over that in the control counterparts.

Tissue RNA extraction and protocol for the detection of microRNAs. For formalin-fixed paraffin-embedded tissues, total RNA was extracted from four slices 15-µm thick (total thickness of 60 µm) using a RecoverAll Total Nucleic Acid Isolation Kit (Ambion, Carlsbad, CA, USA), and eluted into 60 µl of Elution Solution according to the manufacturer's instructions. The amounts of miRNAs were quantified in duplicate via qRT-PCR using human TaqMan MicroRNA Assay Kits (Applied Biosystems, Foster City, CA, USA). The reverse transcription reaction was carried out with a TaqMan MicroRNA Reverse Transcription Kit (Applied Biosystems) in 15 µl containing 5 µl of RNA extract, 0.15 µl of 100 mM dNTPs, 1 µl of MultiScribe Reverse Transcriptase (50 U µl⁻¹), 1.5 µl of 10× Reverse Transcription Buffer, 0.19 µl of RNase inhibitor (20 U µl⁻¹), 3 µl of gene-specific primer, and 4.16 µl of nuclease-free water. For the synthesis of cDNA, the reaction mixtures were incubated at 16 °C for 30 min, at 42 °C for 30 min, at 85 °C for 5 min, and then held at 4 °C. Next, 1.33 µl of cDNA solution was amplified using 10 µl of TaqMan 2× Universal PCR Master Mix with no AmpErase UNG (Applied Biosystems), 1 µl of gene-specific primers/probe, and 7.67 µl of nuclease-free water in a final volume of 20 µl. Quantitative PCR was run on a 7300 Real-Time PCR System (Applied Biosystems) and the reaction mixtures were incubated at 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min. Cycle threshold (Ct) values were calculated with SDS 1.4 software (Applied Biosystems). The expression of miRNAs from tissue samples was normalised using the 2^-ΔΔCt method relative to U6 small nuclear RNA (RNU6B).

Statistical analysis. The clinicopathological variables pertaining to the corresponding patients were analysed for significance by the χ² or Fisher's exact test. For the analysis of survival, Kaplan-Meier survival curves were constructed for groups based on univariate predictors, and differences between the groups were tested with the log-rank test. Univariate and multivariate survival analyses were performed using the likelihood ratio test of the stratified Cox proportional hazards model. Differences between subgroups were tested with the non-parametric Mann-Whitney U-test. Differences were assessed with a two-sided test and considered significant at the P<0.05 level.
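As a concrete illustration of the 2^-ΔΔCt normalisation used above, the short sketch below computes relative miRNA expression from raw Ct values; the function and the Ct numbers are hypothetical examples, not data from this study.

```python
# Relative quantification by the 2^-ddCt method: a minimal sketch.
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_calib, ct_ref_calib):
    """Fold change of a target miRNA in a sample relative to a calibrator,
    normalised to a reference RNA (here, U6/RNU6B)."""
    d_ct_sample = ct_target_sample - ct_ref_sample  # dCt in the tumour sample
    d_ct_calib = ct_target_calib - ct_ref_calib     # dCt in the calibrator
    dd_ct = d_ct_sample - d_ct_calib
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: miR-375 vs U6 in a tumour and in normal mucosa (calibrator)
print(relative_expression(30.1, 22.0, 27.5, 22.2))  # < 1 indicates downregulation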
RESULTS

Protein expression of YWHAZ in gastric cancer cell lines. Western blotting analysis was performed using a YWHAZ-specific antibody (Figure 1A) to test how much of the YWHAZ protein was expressed in seven gastric cancer cell lines (KatoIII, NUGC4, HGC27, MKN7, MKN28, MKN45, and MKN74). YWHAZ overexpression was observed in almost all cells except for the HGC27 cell line (six out of seven lines, 85.7%), suggesting that this gene is a target for activation in gastric cancer cell lines. A formalin-fixed gastric cancer MKN45 cell line presenting overexpression of YWHAZ, in which >50% of cells showed staining, was used as a positive control, whereas a formalin-fixed gastric cancer HGC27 cell line (data not shown) and MKN45 staining without the YWHAZ antibody, presenting low expression of YWHAZ, were included as negative controls (Figure 1B).

Immunohistochemical analysis of YWHAZ expression in the primary tumours of gastric cancer. As the YWHAZ protein was overexpressed in some gastric cancer cell lines, it was hypothesised that YWHAZ was also highly expressed in gastric cancer tissues and assumed to be part of carcinogenesis and malignant outcomes. The clinicopathological significance of YWHAZ expression was examined in primary tumour samples of gastric cancer based on the immunohistochemical staining pattern of this protein. Specific immunostaining of the YWHAZ protein in primary samples was confirmed using cell lines as positive or negative controls (Figure 2A). Expression of the YWHAZ protein was observed in both the cytoplasm and nucleus of cancer cells. We classified 141 gastric cancer tumours into positive and negative groups according to the intensity and proportion of YWHAZ staining among tumour cells as described in the Materials and Methods. In primary cases, YWHAZ protein expression was negative in most of the non-tumorous gastric mucosal cell population (intensity score 0) and weakly positive in the gastric fundic gland. Supplementary Table 1 shows the distribution of patients with YWHAZ immunoreactivity in tumour cells according to the extent of intensity and proportion. Kaplan-Meier survival estimates showed that YWHAZ immunoreactivity in tumour cells was significantly associated with a worse overall survival according to the extent of the intensity and proportion (Figures 2C and 2D). In the total scores of intensity plus proportion, the high-expression group of YWHAZ, presenting scores ≥4 of tumour cells showing immunopositivity, had a significantly poorer prognosis than the low-expression group (P = 0.0018, log-rank test) (Figure 2B). Concerning separate analyses of YWHAZ protein expression between the cytoplasm and nucleus, as shown in Supplementary Figures 1A and 1B, high expression of the YWHAZ protein in the cytoplasm (P = 0.0461, log-rank test) was associated more with poor prognosis than that in the nucleus (P = 0.4049, log-rank test). According to our results, expression of the YWHAZ protein in the cytoplasm may have a pivotal role in malignant outcomes in patients with gastric cancer.

Association between YWHAZ protein levels and clinicopathological characteristics in primary cases of gastric cancer. The relationship between expression of the YWHAZ protein and clinicopathological characteristics is summarised in Table 1. Protein expression of YWHAZ was significantly associated with larger tumour size, venous invasion, lymphatic invasion, deeper depth of invasion, and higher pathological stage and recurrence rate, whereas other characteristics including histological grade were not.
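For readers who want to reproduce this style of survival comparison, the sketch below uses the lifelines package to draw Kaplan-Meier curves and run a log-rank test for two expression groups; the DataFrame columns and values are hypothetical stand-ins, not the study's survival data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical survival table: follow-up time (months), death event flag, YWHAZ group
df = pd.DataFrame({
    "months": [12, 54, 30, 60, 8, 45, 22, 66],
    "death":  [1, 0, 1, 0, 1, 0, 1, 0],
    "ywhaz_high": [True, False, True, False, True, False, True, False],
})

km = KaplanMeierFitter()
for label, grp in df.groupby("ywhaz_high"):
    km.fit(grp["months"], grp["death"], label=f"YWHAZ high={label}")
    km.plot_survival_function()  # plotting requires matplotlib

hi, lo = df[df["ywhaz_high"]], df[~df["ywhaz_high"]]
res = logrank_test(hi["months"], lo["months"], hi["death"], lo["death"])
print(res.p_value)  # analogous to the log-rank P-values reported above
```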
In the Cox proportional hazard regression model (Table 2), univariate analyses demonstrated that YWHAZ protein expression, age, location, tumour size, venous invasion, lymphatic invasion, pT category, pN category, and the stage of TNM classification were significantly associated with cause-specific survival. When the data were stratified for multivariate analysis using both forward and backward stepwise Cox regression procedures, YWHAZ immunoreactivity in tumour cells remained significant at P<0.05 (hazard ratio, 2.3 (1.003-5.304)) for overall survival in all patients, suggesting that immunoreactivity is an independent predictor of overall survival.

Suppression of cell proliferation by downregulation of YWHAZ expression. To gain insight into the potential role of YWHAZ as an oncogene whose overexpression could be associated with gastric carcinogenesis, we first performed a cell proliferation assay using siRNA specific to YWHAZ to investigate whether knockdown of YWHAZ expression could suppress the proliferation of gastric cancer cells showing overexpression of the gene. In the MKN74 and MKN28 cell lines, expression of the YWHAZ protein was efficiently knocked down 24-72 h after transient introduction of a YWHAZ-specific siRNA (siRNA-YWHAZ) over a control siRNA (siRNA-control) (Figure 3A). The proliferation of MKN74 and MKN28 cells was significantly lower than that of controls, by 42.1 and 28.7%, respectively, after the knockdown of endogenous YWHAZ expression.

Suppression of cell migration and invasion by downregulation of YWHAZ expression. Next, a Matrigel invasion assay was performed to examine the invasive potential of MKN74 (data not shown) and MKN28 cells (Figure 3B) transfected with siRNA-YWHAZ. The number of cells that migrated through the uncoated (migration assay) or Matrigel-coated (invasion assay) membrane into the lower chamber was significantly lower in siRNA-YWHAZ transfected cells than in siRNA-control transfected cells, suggesting that YWHAZ has an invasive potential in gastric cancer cells.

Evaluation of whether miR-375 expression is associated with the extent of YWHAZ immunoreactivity in primary gastric cancer tumours. As shown in a previous report, tumour-suppressive miR-375 is downregulated in gastric cancer tumours and regulates cell survival by targeting YWHAZ in gastric cancer cell lines (Tsukamoto et al, 2010). However, it remains unknown whether miR-375 expression is associated with the extent of YWHAZ immunoreactivity in primary gastric cancer tumours. As a result, the expression of miR-375 in 10 tumours with overexpression of the YWHAZ protein was significantly lower than that in 10 tumours with low expression of the YWHAZ protein (P = 0.0047, Figure 3C).

DISCUSSION

The YWHAZ gene, which encodes the 14-3-3ζ protein, is located on chromosome 8q22.3, and this area is frequently amplified in breast and other cancers (Pollack et al, 2002; Garnis et al, 2004). YWHAZ is included in the 14-3-3 family of proteins, which are a family of evolutionarily highly conserved acidic proteins expressed in all eukaryotic organisms (Aitken, 2006). Moore and Perez first discovered 14-3-3 in 1967 when they fractionated soluble proteins from brain tissue (Moore et al, 1968). In mammals, there are seven distinct isoforms, β, γ, ε, ζ, η, σ, and τ, that are encoded by seven different genes.
14-3-3 proteins have been found to interact with target proteins involved in the regulation of multiple cellular processes, such as cell cycle control, protein trafficking, antiapoptosis, metabolism, signal transduction, inflammation, and cell adhesion/motility (Wilker and Yaffe, 2004; Morrison, 2009). YWHAZ has been identified as a clinically relevant prognostic marker for breast cancer (Neal et al, 2009), lung cancer (Fan et al, 2007), and head and neck cancer (Matta et al, 2008) and may allow for the identification of patients whose tumours are resistant to standard chemotherapies to receive more aggressive treatment (Frasor et al, 2006; Li et al, 2010). YWHAZ has been implicated in the initiation and progression of cancer and has been shown to be overexpressed in multiple cancer tissues and cell lines, such as oesophageal cancer (Sharma et al, 2003), pancreatic cancer (Shen et al, 2004), colon cancer (Lu et al, 2007), and oral cancer (Arora et al, 2005), even if gene amplification was not always detected. Mechanisms independent of the increased gene copy number, such as the modulation of gene transcription, protein translation, or RNA and protein stability, may also contribute to increased protein expression. These findings prompted us to determine the clinicopathological and prognostic significance of YWHAZ overexpression/activation in primary gastric cancer. However, to date, there has been no report on the clinical significance of YWHAZ in patients with primary gastric cancer.

In the present study, it was hypothesised that the overexpression/activation of YWHAZ may promote tumour cell proliferation and/or survival in gastric cancer. To test this hypothesis, we examined the expression status of YWHAZ and the clinicopathological, as well as biological, significance of its expression in cell lines and primary tumours of gastric cancer. Consequently, it was demonstrated that YWHAZ was frequently overexpressed in 51% (72/141) of gastric cancer patients, and this overexpression was a poor prognosticator independent of other prognostic factors. The prognosis of gastric cancer patients was involved in both the intensity and proportion of YWHAZ activity in an expression-dependent manner. In addition, downregulation of YWHAZ expression suppressed cell proliferation, migration, and invasion in gastric cancer cell lines, although several previous studies reported similar results in other cancers. Namely, the overexpression of YWHAZ in breast cancer cell lines enhanced anchorage-independent growth and inhibited stress-induced apoptosis, whereas downregulation of YWHAZ reduced anchorage-independent growth and sensitised cells to stress-induced apoptosis (Neal et al, 2009). Also, in lung cancer, knockdown of YWHAZ sensitised cells to stress-induced apoptosis and enhanced cell adhesion and cell-cell contacts (Li et al, 2008; Niemantsverdriet et al, 2008). The significance of YWHAZ overexpression in gastric cancers remains unknown.

In this study, we also determined the expression of the tumour-suppressive microRNA miR-375 in primary gastric tumours based on the extent of YWHAZ expression. It has been reported that YWHAZ can be post-transcriptionally regulated by the microRNA miR-375 in vitro, leading to the downregulation of both YWHAZ mRNA and protein, effectively altering the available pool of YWHAZ and reducing cell growth in gastric cancer cell lines (Tsukamoto et al, 2010). However, in primary samples, the association between YWHAZ and miR-375 has not been reported and remains unknown.
As a result, we confirmed that the higher expression of the YWHAZ protein was significantly associated with the lower expression of miR-375 in primary samples. This result implies that epigenetic regulation through microRNAs is one of the mechanisms for the overexpression of YWHAZ in primary gastric cancer. Other fascinating reports are that the overexpression of YWHAZ occurs in the premalignant hyperplastic stages of oral and oesophageal cancers (Bajpai et al, 2008; Matta et al, 2007; Ralhan et al, 2009) and in atypical ductal hyperplasia and ductal carcinoma in situ (DCIS) of breast cancer (Wulfkuhle et al, 2002; Danes et al, 2008). These findings suggested that YWHAZ may contribute to carcinogenesis and the development of early-stage cancers. Moreover, YWHAZ overexpression was found to be a 'second hit' in a subset of ERBB2-overexpressing DCIS lesions, facilitating the transition from noninvasive DCIS to life-threatening invasive breast cancer through the activated TGF-β/Smads pathway leading to epithelial-to-mesenchymal transition (EMT). Indeed, co-overexpression of YWHAZ and ERBB2 in breast cancers from patients was significantly correlated with distant metastasis, poor prognosis, and higher rates of recurrence in breast cancer patients. In gastric cancer, overexpression of the ERBB2 protein, also known as HER2, was associated with poor prognosis (Yonemura et al, 1991; Uchino et al, 1993). In addition, a recent study demonstrated that a monoclonal antibody against HER2, trastuzumab, in combination with chemotherapy (ToGA study) can be considered as a new standard option for patients with HER2-positive advanced gastric cancer (Bang et al, 2010); moreover, this treatment contributes to survival prolongation. Therefore, YWHAZ may be a key molecule for selecting prospective patients with malignant outcomes associated with HER2 expression in this chemotherapy. This issue is currently being evaluated.

In conclusion, this is the first report demonstrating that YWHAZ has a pivotal oncogenic role and is a potential therapeutic target in gastric cancer. We showed frequent overexpression of the YWHAZ protein and its prognostic value in patients with gastric cancer. Although studies of larger cohorts are needed to validate these findings before moving to a clinical setting, our results may provide the possibility that YWHAZ is an important molecular marker for determining malignant properties and a target for molecular therapy in patients with this lethal disease.

Supplementary Information accompanies this paper on the British Journal of Cancer website (http://www.nature.com/bjc)
2017-11-08T17:29:43.830Z
2013-02-19T00:00:00.000
{ "year": 2013, "sha1": "2eb81d58ea814cc50c9ab954ffc0da655f0b940e", "oa_license": "CCBYNCSA", "oa_url": "https://www.nature.com/articles/bjc201365.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2eb81d58ea814cc50c9ab954ffc0da655f0b940e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
67841078
pes2o/s2orc
v3-fos-license
Determining water use of sorghum from two-source energy balance and radiometric temperatures

Estimates of surface actual evapotranspiration (ET) can assist in predicting crop water requirements. An alternative to the traditional crop-coefficient methods are the energy balance models. The objective of this research was to show how surface temperature observations can be used, together with a two-source energy balance model, to determine crop water use throughout the different phenological stages of crop growth. Radiometric temperatures were collected in a sorghum (Sorghum bicolor) field as part of an experimental campaign carried out in Barrax, Spain, during the 2010 summer growing season. Performance of the Simplified Two-Source Energy Balance (STSEB) model was evaluated by comparison of estimated ET with values measured on a weighing lysimeter. Errors of ±0.14 mm h⁻¹ and ±1.0 mm d⁻¹ were obtained at hourly and daily scales, respectively. Total accumulated crop water use during the campaign was underestimated by 5 %. It is then shown that thermal radiometry can provide precise crop water necessities and is a promising tool for irrigation management.

Introduction

Understanding the surface actual evapotranspiration (ET) is essential for managers responsible for planning and managing water resources, especially in arid and semi-arid regions where crop water demand generally exceeds precipitation, and irrigation from surface and/or groundwater resources is then required to meet the deficit. This is particularly important in areas where water usage is regulated due to ecological protection programs, limited resources, or competitive demand (Piccinni et al., 2009). Production of certain crops involves changes in land uses that might compromise water conservation strategies. This is of utmost importance in regions where determining crop water requirements specific to each crop is key to providing growers with information to select which crops to grow and to determine the timing and quantity of irrigation events throughout the growing season (MARA, 2009).

Actual evapotranspiration varies regionally and seasonally according to weather conditions. The use of on-site meteorological data and crop coefficients enables the determination of crop water use. However, generic crop coefficients will not fulfill the need for precise irrigation applications, since they lack flexibility to account for temporal and spatial variation in crop water needs (Pinter et al., 2003), and specific crop coefficients need to be developed (López-Urrea et al., 2009a, b, c). This can be a limitation for providing spatially distributed regional ET information. The utility of the crop surface temperature to detect crop water stress has long been recognized, based on the fact that when water is available, the water transpired by the plants evaporates and cools the leaves, whereas in a water-deficit situation, transpiration is scarce and canopy temperature increases (González-Dugo et al., 2006; Pinter et al., 2003; Gardner et al., 1992; Jackson et al., 1981). This theory has been used to develop indices that combine meteorological data with thermal remotely sensed information to provide a relative measure of plant water status and health. The agricultural remote sensing literature abounds with examples of the application of thermal indices to schedule irrigation in various crops (e.g., Moran et al., 1994; Hatfield, 1983; Wang and Gartung, 2010).
Some authors, such as Faver and O'Toole (1989) or Choudhury and Idso (1985), used canopy temperature together with the Penman-Monteith equation to estimate crop ET. These authors reported high correlation between lysimeter ET and modeled ET from sorghum and wheat for selected days. Similar results were also found by Hatfield et al. (1983) and Jackson et al. (1983), but now focusing on a particular time of day.

Remotely sensed surface temperature has long been used, too, as a key input to determine ET as a residual of the land surface Energy Balance Equation (EBE) (e.g., Bastiaanssen et al., 1998; Su, 2002; Kustas and Norman, 1996; Pinter et al., 2003; Gavilan and Berengena, 2007). For example, Bashir et al. (2008) used the Surface Energy Balance Algorithm for Land (SEBAL) (Bastiaanssen et al., 1998), together with Landsat/ETM+ and MODIS images, to estimate ET of a large irrigated sorghum area. Comparison with ET calculated using the water balance approach, for 4 selected days, showed an average absolute error around 0.9 mm d⁻¹. However, SEBAL requires heterogeneity in surface moisture conditions. A dry or "hot" pixel, where LE is assumed to be zero, and a wet or "cold" pixel, where H is expected to be zero, are required. Due to the difficulty of bringing together these two extreme conditions, SEBAL is not applicable to small crop fields (Bastiaanssen et al., 1998). These and some other problems have been pointed out due to difficulties in the quantification of aerodynamic resistances, especially under partial fraction cover conditions (Hall et al., 1992). Two-Source Energy Balance models may solve some of these limitations by allowing the estimation of soil and canopy contributions to the total energy fluxes, including evapotranspiration (Norman et al., 1995; Li et al., 2005). Sánchez et al. (2008, 2009) showed the potential of a simplified version of the two-source energy balance model (STSEB), when direct measurements of radiometric surface temperature are available, in a corn crop and a forest ecosystem, respectively.

In this paper, the STSEB model will be used together with thermal radiometry to determine ET values in a sorghum crop. The necessity of exploring the water use of sorghum as a potential biofuel source motivated the selection of this crop for the present study. In this work we focus on a forage sorghum field located at the "Las Tiesas" experimental site in Barrax, Spain. A field campaign was carried out in the summer growing season of 2010 with the aim of studying water balance techniques and the water necessities of energy crops. A weighing lysimeter was placed at this site to register sorghum ET values, and two infrared thermal radiometers (IRT) were installed to measure surface temperatures.

The main objective of this study is to present this method as a simple and feasible technique to determine short- and long-term crop water use from thermal infrared radiometry and ancillary meteorological data, under clear and cloudy sky conditions and covering different stages of crop development, that could further be used as an alternative to the weighing lysimeters required to determine irrigation needs or to calibrate crop-coefficient-based algorithms. This paper is organized as follows. Information related to the study site, experimental set-up, and measured variables and parameters is presented in Sect. 2. A summary of the main equations and aspects of the STSEB approach is given in Sect. 3.
Section 4 shows the analysis of the measured radiometric temperatures and the estimated surface energy fluxes in the sorghum field. Modeled values of hourly and daily ET, together with the comparison with lysimeter ET measurements, are included in this section. Finally, the main conclusions are given in Sect. 5.

Study site and materials

This study was conducted during the summer of 2010 at the "Las Tiesas" farm, located between Barrax and Albacete (Central Spain). Its geographical coordinates are longitude 2°5′ W, latitude 39°14′ N, and its altitude is 695 m a.s.l. (Fig. 1a). The climate is semi-arid, temperate Mediterranean, with 320 mm of annual rainfall, mostly concentrated in the spring and fall. The average mean, maximum and minimum temperatures are 13.7, 24.0 and 4.5 °C, respectively. For a more detailed description of the climate of the area see López-Urrea et al. (2006).

The soil is classified as Petrocalcic Calcixerepts (Soil Survey Staff, 2006). The average soil depth of the experimental plot was 40 cm, limited by the development of a more or less fragmented petrocalcic horizon. The texture is silty-clay-loam, with 13.4 % sand, 48.9 % silt and 37.7 % clay, with a basic pH. The soil is low in organic matter and in nitrogen, and has a high content of active limestone and potassium.

To determine actual forage sorghum (Sorghum bicolor (L.) Moench cv. H-133) ET, a weighing lysimeter was used (Fig. 1b). To schedule irrigation, ETc values were calculated from the daily mass loss minus the drainage loss and the mass added by irrigations and/or rainfall. Water lost from the lysimeter was replaced, maintaining non-limiting soil water content. The lysimeter is located in the center of a 100 m × 100 m plot, where sorghum was sown on 27 May 2010 (DOY 147) in rows (N-S orientated) of 35-cm spacing. The plant population was 21 plants m⁻². Plant samples from three separate areas were obtained periodically to measure crop development. Leaf area index (LAI), fractional vegetation cover (Pv), and crop height (h) were measured from the three samples. The sorghum reached a maximum crop height of nearly 5 m and a maximum LAI of 11 m² m⁻², and the final harvest dry matter was around 2.7 kg m⁻². According to MARM (2009), the average yield in Spain for irrigated forage sorghum is 3.8 kg m⁻². Field harvest was on 23 September 2010 (DOY 265).

The whole plot has a permanent sprinkler irrigation system, with sprinklers placed on a grid of 15 × 12.5 m that provide a precipitation rate of 8.6 mm h⁻¹. The lysimeter container is 2.7 m long, 2.3 m wide and 1.7 m deep, with an approximate total weight of 14.5 Mg. Efforts were made to keep the crop inside the lysimeter at the same growth rate and plant population (21 plants m⁻²) as the crop outside, to minimize edge effects. The lysimeter soil-containing tank sits on a system of scales with a counterweight that offsets the dead weight of the soil and the tank. The de-multiplication factor of the system is 1000:1. A steel load cell (model SB2, Epelsa Ind., S.L.) is connected to the system of balances. (Trade and company names are given for the benefit of the reader and imply no endorsement by the authors.) The balance-beam weighing system allows measurements of ET in the lysimeter with a resolution of 0.04 mm equivalent water depth. The sample frequency was 1 s, and a reading was registered by a datalogger (CR10X, Campbell Scientific Ltd., Logan, Utah, USA) every 15 min. Additional information about the technical features of the lysimeter may be found in López-Urrea et al. (2006).
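As a minimal illustration of this bookkeeping, the sketch below computes a daily lysimeter ET value from the tank mass change and the measured water inputs and outputs; the function and numbers are hypothetical, and the unit conversion assumes the 2.7 m × 2.3 m lysimeter surface given above (1 kg of water per m² equals 1 mm of depth).

```python
# Daily lysimeter water balance: a minimal sketch, not the authors' processing code.
AREA_M2 = 2.7 * 2.3  # lysimeter surface area (m^2)

def daily_et_mm(mass_start_kg, mass_end_kg, irrigation_kg, rain_kg, drainage_kg):
    """ET = storage decrease + water added - drainage, expressed as mm of water."""
    et_kg = (mass_start_kg - mass_end_kg) + irrigation_kg + rain_kg - drainage_kg
    return et_kg / AREA_M2

# Hypothetical day: tank lost 31 kg, 25 kg of irrigation applied, no rain, 2 kg drained
print(round(daily_et_mm(14500.0, 14469.0, 25.0, 0.0, 2.0), 2))  # ~8.7 mm d^-1
```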
The lysimeter readings were checked daily to identify individual readings that were not explained by natural processes of water input and loss. Data losses occurred during irrigation and precipitation events, weight and calibration verifications, and once, when the soil inside the lysimeter tank was cultivated. The resulting data were compiled to obtain the measurement of sorghum ET.

Starting on 19 June (DOY 174), radiometric surface temperature was measured using an Apogee SI-211 thermal infrared radiometer (IRT). This radiometer has a broad thermal band (6-14 µm) with an accuracy of ±0.3 °C and a 28° field of view. It was placed at a height of 2 m above the canopy level at any time, looking at the surface with nadir view (Fig. 1c, d). Sky brightness temperature was measured by a second Apogee radiometer pointing at the sky with an angle of 53° (Rubio, 1998). These radiance values were used for the atmospheric correction of the surface temperature. The IRTs were calibrated before the experiment. The calibration was done using a blackbody source (Model Land P80P) and encompassed a wide range of temperatures (-5 to 50 °C), exceeding those experienced in the field. Unfortunately, the measurement of surface temperature failed after 9 September (DOY 249), reducing our study period to 75 days.

Model description

The net energy balance of the soil-canopy-atmosphere system is given by:

Rn = H + λET + G, (1)

where Rn is the net radiation flux (W m⁻²), H is the sensible heat flux (W m⁻²), λET is the latent heat flux (W m⁻²), and G is the soil heat flux (W m⁻²). Some other minor terms, such as photosynthesis, advection or canopy storage, have been neglected in Eq. (1). The effective radiometric surface temperature in the same system, TR (K), can be obtained as a weighted composite of the soil temperature, Ts (K), and the canopy temperature, Tc (K):

ε TR⁴ = Pv(θ) εc Tc⁴ + [1 − Pv(θ)] εs Ts⁴, (2)

where εc and εs are the canopy and soil emissivities, respectively, ε is the effective surface emissivity, and Pv(θ) is the fractional vegetation cover for the viewing angle θ. Using Eq. (2), values of Tc and Ts can be retrieved from a system of two equations with two unknowns if measures of TR are available at two different view angles. However, an additional assumption is required when measures from the second view angle are missing (Norman et al., 1995; Sánchez et al., 2008).

In this work, a Simplified version of a Two-Source configuration of the Energy Balance (STSEB) (Sánchez et al., 2008) was used. According to this approach, the soil and canopy contributions (values per unit area of component) to the total sensible heat flux, Hs and Hc, respectively, are added, weighted by their respective partial areas, as follows:

H = Pv Hc + (1 − Pv) Hs, (3)

where Pv (without a view angle argument) refers to the fraction cover at nadir view (i.e. θ = 0°). In Eq. (3), Hc and Hs are expressed as:

Hc = ρCp (Tc − Ta) / ra_h, (4)

Hs = ρCp (Ts − Ta) / (ra_a + ra_s), (5)

where ρCp is the volumetric heat capacity of air (J K⁻¹ m⁻³), Ta is the air temperature at a reference height (K), ra_h is the aerodynamic resistance to heat transfer between the canopy and the reference height at which the atmospheric data are measured (s m⁻¹), ra_a is the aerodynamic resistance to heat transfer between the point z0M + d (z0M: canopy roughness length for momentum; d: displacement height) and the reference height (s m⁻¹), and ra_s is the aerodynamic resistance to heat flow in the boundary layer immediately above the soil surface (s m⁻¹). A summary of the expressions to estimate these resistances can be seen in Sánchez et al. (2008).
Equations (4) and (5) are taken from the parallel configuration of the TSEB model (Norman et al., 1995; Li et al., 2005), modified to take into account the distinction between ra_h and ra_a. This distinction is necessary since the transport of heat and momentum is not equally efficient over the canopy (Sánchez et al., 2008). The partitioning of the net radiation flux, Rn, between the soil and canopy is proposed as follows:

Rn = Pv Rnc + (1 − Pv) Rns, (6)

where Rnc and Rns are the contributions (values per unit area of component) of the canopy and soil, respectively, to the total net radiation flux. They are estimated by establishing a balance between the long-wave and the short-wave radiation separately for each component:

Rnc = (1 − αc) S + εc Lsky − εc σ Tc⁴, (7)

Rns = (1 − αs) S + εs Lsky − εs σ Ts⁴, (8)

where S is the solar global radiation (W m⁻²), αs and αc are the soil and canopy albedos, respectively, σ is the Stefan-Boltzmann constant, and Lsky is the incident long-wave radiation (W m⁻²). Note that long-wave emission from one component over the other is not accounted for, since no direct coupling is considered between soil and vegetation in the STSEB scheme (Sánchez et al., 2008). A similar expression is used to combine the soil and canopy contributions, λETs and λETc, respectively, to the total latent heat flux:

λET = Pv λETc + (1 − Pv) λETs. (9)

According to this framework, a complete and independent energy balance between the atmosphere and each component of the surface is established, from the assumption that all the fluxes act vertically. In this way, the component contributions to the total latent heat flux can be written as:

λETc = Rnc − Hc, (10)

λETs = Rns − Hs − G/(1 − Pv). (11)

Finally, G can be estimated as a fraction (CG) of the soil contribution to the net radiation (Choudhury et al., 1987):

G = CG (1 − Pv) Rns, (12)

where CG can vary in a range of 0.2-0.5 depending on the soil type and moisture.

Radiometric temperatures

The growth cycle of the sorghum plants was captured by interpolation from the periodic measurements taken over the course of the experiment. A third-order regression equation was used for the canopy height (Fig. 2a). For the fraction cover, measured values also fitted a third-order equation for Pv < 1, whereas a constant value of Pv = 1 was assumed from DOY 200 to the end of the experiment (Fig. 2b). Under these conditions of full vegetation coverage the two-source scheme becomes a single-source approach, with the vegetation as the only component exchanging energy with the atmosphere. Then, TR = Tc in Eq. (2), and transpiration is responsible for the total ET of the crop system. Also, differences between Tc and Ta are less than 1 °C for non-stressed canopies, which yields minor values for the sensible heat flux. Under these conditions λET becomes the dominant flux on the right-hand side of Eq. (1). Since soil temperature measurements were not available in this study, for partial cover conditions we assumed Tc ≈ Ta, and Ts was inferred from Eq. (2) together with the measured TR values. The Apogee IRT measurements were corrected for emissivity and atmospheric effects using the radiative transfer equation adapted to ground measurements (Sánchez et al., 2008). Values of εc = 0.985 ± 0.011 and εs = 0.960 ± 0.013 were used for this study (Rubio et al., 2003). The effective surface emissivity, ε, was calculated following the method proposed by Valor and Caselles (1996) (see Fig. 2b). The downwelling long-wave radiance, required for the atmospheric correction, was determined from the IRT values registered by the second Apogee pointing to the sky (Rubio, 1998).
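To make the STSEB bookkeeping concrete, the sketch below strings together the steps just described: a broadband emissivity/sky correction of the IRT reading, inversion of Eq. (2) for the soil temperature under the paper's assumption Tc ≈ Ta, and the flux combinations of Eqs. (3)-(12). It is an illustrative simplification with hypothetical inputs: the aerodynamic resistances, whose formulations are given in Sánchez et al. (2008), are treated here as known inputs, and the band-specific calibration of the actual instruments is replaced by a blackbody approximation.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
RHO_CP = 1200.0   # volumetric heat capacity of air (J K^-1 m^-3), a typical value

def correct_tr(t_brightness, t_sky, eps):
    """Broadband emissivity/sky correction of an IRT brightness temperature:
    sigma*Tb^4 = eps*sigma*TR^4 + (1 - eps)*sigma*Tsky^4, solved for TR."""
    return ((t_brightness**4 - (1.0 - eps) * t_sky**4) / eps) ** 0.25

def stseb_fluxes(t_r, t_a, p_v, s_glob, l_sky, ra_h, ra_a, ra_s,
                 eps_c=0.985, eps_s=0.960, alb_c=0.23, alb_s=0.13, c_g=0.2):
    """Simplified two-source energy balance, Eqs. (1)-(12); assumes Tc ~ Ta."""
    t_c = t_a
    eps = p_v * eps_c + (1.0 - p_v) * eps_s            # effective emissivity (approx.)
    if p_v < 1.0:                                      # invert Eq. (2) for Ts
        t_s = ((eps * t_r**4 - p_v * eps_c * t_c**4) / ((1.0 - p_v) * eps_s)) ** 0.25
    else:
        t_s = t_r                                      # full cover: TR = Tc
    h_c = RHO_CP * (t_c - t_a) / ra_h                  # Eq. (4); zero when Tc = Ta
    h_s = RHO_CP * (t_s - t_a) / (ra_a + ra_s)         # Eq. (5)
    h = p_v * h_c + (1.0 - p_v) * h_s                  # Eq. (3)
    r_nc = (1.0 - alb_c) * s_glob + eps_c * l_sky - eps_c * SIGMA * t_c**4  # Eq. (7)
    r_ns = (1.0 - alb_s) * s_glob + eps_s * l_sky - eps_s * SIGMA * t_s**4  # Eq. (8)
    r_n = p_v * r_nc + (1.0 - p_v) * r_ns              # Eq. (6)
    g = c_g * (1.0 - p_v) * r_ns                       # Eq. (12)
    le = r_n - h - g                                   # residual of Eq. (1)
    return r_n, h, g, le

# Hypothetical midday case: brightness 304.5 K, sky 260 K, Ta = 302 K, half cover
t_r = correct_tr(304.5, 260.0, 0.97)
print(stseb_fluxes(t_r, 302.0, 0.5, 800.0, 350.0, 40.0, 30.0, 60.0))
```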
Temperatures of the sunlit and shaded portions of a component (soil or vegetation) differ by some degrees. Thanks to the wide field of view of the Apogee radiometers, and their deployment configuration over the sorghum, measured values of TR, and estimated values of Ts, accounted for both sunlit and shaded portions of the soil and canopy.

Figure 3 shows four examples of the diurnal evolution of the gradient TR - Ta, representative of different vegetation cover conditions: two for intermediate vegetation cover when the soil component is still visible (DOY 184, 185), and two for full cover conditions (DOY 229, 236). Furthermore, these four examples are also representative of a variety of cloud cover conditions: one cloud-free day (DOY 236), one fully overcast day (DOY 229), and a couple of partially cloudy days (DOY 184, 185). Surface temperature was generally warmer than air temperature during the middle of the day. This difference was minimum for full vegetation cover conditions (<1 °C), and increased with the amount of soil exposed (Fig. 3). Irrigation was scheduled according to the water loss determined by the lysimeter throughout the growing season to ensure enough water availability for transpirational cooling, avoiding plant water stress and hence warming of the canopy. At night, thermal inversion appeared and the surface temperature was 2-3 °C cooler than the air temperature. This difference can be even higher for rainfall or irrigation events. These temperature gradients control the exchange of H between the surface and the atmosphere, adding or reducing energy to the available Rn.

Modelled ET

Surface temperature was used, together with registered solar radiation and downwelling long-wave radiation, to calculate Rn from Eqs. (6)-(8). Values of 0.13 and 0.23 (Castrignanò et al., 1997) were used for the soil and canopy albedo, respectively, although some variation in the albedos over the season is possible. Wind speed measurements from the adjacent weather station (representative of the values over the sorghum due to the flat terrain and the 60-min temporal averages used) were used to calculate the aerodynamic resistances required in Eqs. (4) and (5). These resistances, together with the surface and air temperature data, yielded the H results. A value of CG = 0.2, appropriate for wet soils (Choudhury et al., 1987), was assumed in Eq. (12) to estimate G values. Figure 4 shows hourly values of all flux components in Eq. (1). Note that, as a residual of the EBE, λET is principally controlled by Rn and modulated by H. For our study period most of the available energy was partitioned to λET. H was the dominant term for the first weeks after planting, when the fraction cover was still very low, but unfortunately the measurement of TR started on DOY 174 and data are not available for that period.
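The flux-to-depth conversion that underlies the ET comparisons in the next section can be sketched as below, assuming a constant latent heat of vaporization of about 2.45 MJ kg⁻¹ (a typical value, not specified in the paper; 1 kg m⁻² of water equals 1 mm of depth).

```python
LAMBDA = 2.45e6  # latent heat of vaporization (J kg^-1), a typical near-20 degC value

def le_to_et_mm_per_hour(le_w_m2):
    """Convert a latent heat flux (W m^-2) to an hourly ET depth (mm h^-1)."""
    return le_w_m2 * 3600.0 / LAMBDA

# Hypothetical midday latent heat flux of 426 W m^-2
print(round(le_to_et_mm_per_hour(426.0), 2))  # ~0.63 mm h^-1
# A daily total would sum the hourly depths over the daytime window (07-21 h, as below)
```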
Values of latent heat flux were converted into ET values by dividing by the latent heat of vaporization, λ, and compared to the ET water loss registered by the lysimeter. Figure 5 shows four examples of the diurnal evolution of these hourly ET values. STSEB estimations of ET match the measured ET values under a wide range of vegetation cover fractions and cloudy sky conditions. Note that energy balance models yield ET values also under rainfall or irrigation conditions, when the lysimeter measurement is compromised. Two examples of this effect can be observed in plots 5b and 5c, where measured ET drops as a consequence of an irrigation event after 22 h on DOY 185 and due to a rainfall event between 12 and 15 h on DOY 229, respectively. Also, the quick response of the energy balance models to changes in environmental conditions can be observed on DOY 184, in which after a cloudy morning the sky clears between 12 and 14 h before getting cloudy again (Fig. 4a). Figure 5a shows how modelled ET values peak for this interval, a consequence of the increase in available net radiation, whereas measured ET does not.

Since night-time ET was generally negligible, hourly averages between 7 and 21 h were used for the quantitative test of the STSEB model. With this filtering we tried to avoid events such as irrigation (around midnight) or early morning dew. Rainfall events were also excluded from the hourly analysis. More than 1000 single observations were used for the comparison of the diurnal ET values (Fig. 6a). Besides the linear regression, the accuracy of prediction was quantified using the Root Mean Square Deviation (RMSD) between estimated and measured ET values. The systematic deviation was illustrated by the bias estimator (Bias), and the relative error by the Mean Absolute Percentage Difference (MAPD) (Willmott, 1982). Figure 6a shows the overall good agreement between modelled and measured instantaneous ET values. The STSEB model tended to slightly overestimate the highest and underestimate the lowest ET values. However, on average the STSEB model reproduced the lysimeter hourly ET measurements with negligible systematic deviation and a RMSD of ±0.14 mm h⁻¹. Thus, a relative error of 22 % was obtained for instantaneous ET.

Beyond the performance of a model at an instantaneous scale, what is really important from the point of view of irrigation planning or water saving is the capacity of a model to estimate daily ET values, and further the cumulative water loss by evapotranspiration. Figure 7 shows the evolution of the daily ET values modelled and measured for the duration of the experiment. For a first stage, when the energy balance was still influenced by the exposed soil surrounding the sorghum plants, the average trend of daily ET was to increase with the vegetation fraction cover. Daily ET peaked by the middle of July, with values reaching 10 mm d⁻¹, and then decreased until the beginning of September. The lowest daily ET values, close to 2 mm d⁻¹, were observed for some cloudy and rainy days by the middle of August. A total of 73 days were used for the quantitative comparison, with the only exclusion being rainy days with registered rainfall amounts over 5 mm. Modelled values underestimated the lysimeter daily ET measurements by 0.3 mm d⁻¹, with a RMSD of ±1.0 mm d⁻¹ (Fig. 6b). Thus, a relative error of 12 % was obtained for daily ET.
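A minimal sketch of the agreement statistics used above (RMSD, bias, and MAPD, after Willmott, 1982); the arrays are hypothetical stand-ins for the modelled and lysimeter-measured ET series.

```python
import numpy as np

def agreement_stats(modelled, measured):
    """Return RMSD, bias (modelled - measured), and MAPD (%) between two series."""
    modelled, measured = np.asarray(modelled), np.asarray(measured)
    diff = modelled - measured
    rmsd = float(np.sqrt(np.mean(diff**2)))
    bias = float(np.mean(diff))
    mapd = float(100.0 * np.mean(np.abs(diff) / np.abs(measured)))
    return rmsd, bias, mapd

# Hypothetical hourly ET (mm h^-1): model vs lysimeter
model = [0.55, 0.62, 0.48, 0.70, 0.30]
lysim = [0.50, 0.65, 0.55, 0.60, 0.35]
print(agreement_stats(model, lysim))
```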
A similar RMSD value of ±0.9 mm d⁻¹ and an underestimation of 6 % were observed when daily ET values were calculated using the standard FAO56 methodology (Allen et al., 1998). These results are in agreement with some recent works, such as Kato and Kamichika (2006). For the 75-day period studied in this work, a total ET of 524 mm was measured by the lysimeter, very close to the 500 mm estimated from the STSEB model together with the sorghum radiometric temperatures as input. For the same period, the total rainfall registered was 33 mm, and a total irrigation of 506 mm was applied. Thus, the cumulative ET obtained by the STSEB model underestimated the lysimeter register by 5 %. These results illustrate the ability of surface temperature, together with the energy balance, to estimate both short-term and long-term ET rates, and then to determine crop water necessity and schedule crop irrigation. This study will be further completed with the application to other biofuel crops, such as sunflower and maize, with particular emphasis on their sparse growth phase.

Conclusions

This work was motivated by an increasing production of energy crops in semi-arid regions and the need to determine their water requirements. This study focused on the evaluation of a two-source energy balance model to estimate crop water necessities from radiometric temperature information in a forage sorghum field. Two IRT radiometers were used, together with meteorological data, to run the STSEB model. Measurements in a weighing lysimeter were used to test the modelled ET values at both hourly and daily scales. For a variety of Pv and weather conditions, the sorghum ET estimations were generally good, and both very high and very low ET values were quite well captured by the model. Average errors of 22 % and 12 % were obtained for hourly and daily ET values, respectively, and the total cumulated ET for the study period was underestimated by 5 %. These results confirm the STSEB model as an alternative to water balance techniques to determine accurate short-term and long-term actual evapotranspiration. The presented methodology could then be used to estimate ground-truth ET values, as an alternative to the weighing lysimeters required to determine irrigation needs or to calibrate crop-coefficient-based algorithms.

Figure 1. (a) Location of the experimental site. (b) Lysimeter placed in the center of the [...]
Figure 2. (a) Evolution of the modelled sorghum height during the experiment (line) [...]
Figure 4. Diurnal evolution of the sorghum flux components of the energy balance [...]
2018-12-21T02:42:55.214Z
2011-10-05T00:00:00.000
{ "year": 2011, "sha1": "165a5e819a2748036da1f521d684a85a70d5ab5e", "oa_license": "CCBY", "oa_url": "https://hess.copernicus.org/articles/15/3061/2011/hess-15-3061-2011.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "165a5e819a2748036da1f521d684a85a70d5ab5e", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
271125336
pes2o/s2orc
v3-fos-license
Merging Data with Modeling: An Example from Fatigue

It is well known that errors are inevitable in experimental observations, and it is equally impossible to eliminate errors in modeling the process leading to the experimental observations. If estimation and prediction are to be done with reasonable accuracy, the accumulated errors must be adequately managed. Research in fatigue is challenging because modeling can be quite complex. Furthermore, experimentation is time-consuming, which frequently yields limited data. Both of these exacerbate the magnitude of the potential error. The purpose of this paper is to demonstrate a procedure that combines modeling with independent experimental data to improve the estimation of the cumulative distribution function (cdf) for fatigue life. Subsequently, the effect of intrinsic error will be minimized. Herein, a simplified fatigue crack growth model is used. The data considered are a well-known collection of fatigue lives for an aluminum alloy. For lower applied stresses, the fatigue lives can range over an order of magnitude and up to 10⁷ cycles. For larger applied stresses, the scatter in the lives is considerably reduced. Consequently, modeling must encompass a variety of conditions. The primary conclusion of the effort is that merging independent experimental data with a reasonably acceptable model vastly improves the accuracy of the calibrated cdfs for fatigue life, given the loading conditions. This allows for improved life estimation and prediction. For the aluminum data, the calibrated cdfs are shown to be quite good by using statistical goodness-of-fit tests, stress-life (S-N) analysis, and confidence bounds estimated using the mean square error (MSE) method. A short investigation into the effect of sample size is also included. Thus, the proposed methodology is warranted.

Introduction

Error analysis has been employed for quite some time, especially for complex engineering problems. Typical issues associated with error analysis are determining how errors combine, how errors propagate, or, more importantly, how errors are mitigated. Uncertainty can occur through either systematic or random errors. Frequently, the propagation of uncorrelated errors is assessed by using the square root of the sum of the squares of the errors. The difficulty in this type of analysis is determining how many contributors to the total uncertainty need to be considered. Useful presentations of error analysis are contained in references [1-5]. An article that uses error analysis as a teaching tool is [6]. The content is well presented, and there is an excellent reference section. A more recent journal article that uses error analysis is [7]. The authors provide a concise review of the basic approaches for error analysis: the reliability state function, the first-order second-moment method, the response surface method, and sensitivity analysis of errors. For the mechanical system that they considered, they were successful with their approach. They also provide a valuable reference list. It goes without saying that error analysis depends on the accuracy of the model and the data. The amount of uncertainty in each variable in the model increases the overall uncertainty. As more sources of uncertainty are considered, the overall uncertainty must increase. Without some control on uncertainty, it can become so large that the results are overshadowed by the accumulated error.
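As a small illustration of the root-sum-square rule mentioned above, the sketch below combines independent, uncorrelated uncertainty contributions; the component values are hypothetical.

```python
def combined_uncertainty(*components):
    """Root-sum-square combination of independent, uncorrelated errors."""
    return sum(c**2 for c in components) ** 0.5

# Hypothetical contributors: load measurement, geometry, material scatter
print(round(combined_uncertainty(0.03, 0.02, 0.05), 4))  # ~0.0616
```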
The approach to be demonstrated below is a combination of scientifically or physically based modeling with adjustments made by strategically fusing an independent set of experimental data. The method was first developed for modeling the yield strength of an aircraft engine alloy [8]. Extensive mechanistic and structural materials modeling was employed to estimate the yield strength. Even though the scientific model predictions were detailed and thorough, modeling error meant that they did not adequately match the rich databases that were to be used to validate the yield strength prediction. The model was subsequently calibrated by using the additional yield strength data. The result was that, for this application, the calibrated yield strength was excellent for estimation and prediction. A major reason for the success in this problem is that the variability in the simulated model results and in the experimental data for the yield strengths is reasonably small. Furthermore, yield strength is time-independent.

Fatigue, obviously, is time-dependent, and, consequently, life data tend to have greater variability. The proposed methodology was applied to data from a very high cycle fatigue application for a steel [9]. The selected data were primarily for three distinct cases. One case, for a relatively high applied load, exhibited failure induced by surface abnormalities only. The amount of scatter was comparatively small. For another case, with a moderate applied load, the failures were primarily induced by surface flaws, but there were a few failures that initiated at subsurface inclusions. In this case, the scatter in fatigue lives was almost four orders of magnitude. The final example was for a smaller applied load, for which about half of the failures were surface-induced and the remainder were internally induced. The scatter was about three orders of magnitude. This application included uncertainty from a variety of inputs, and the bimodal behavior is an added complication. Thus, a mechanistic model that characterizes the complexities is necessary because inadequacies or oversights are contributors to uncertainty. As modeling complexity increases, the amount of extra data needed for calibration also increases. The overriding conclusion from this work is that calibration of a viable model with sufficient data improves reliability estimation and prediction. The technique seems to be appropriate and warranted.

Based on the success of the above examples, the purpose of this work is to investigate the applicability of the methodology for two other sets of fatigue data. The integration of fatigue life data with a mechanistic model is investigated for data given in Shimokawa and Hamaguchi [10]. This is a detailed and reputable set of data. These data have been used by others; one recent example is [11]. Again, the strategy is to incorporate suitable additional fatigue data with mechanistic modeling to overcome inherent error and to improve subsequent reliability estimations and predictions with statistical confidence.

2024-T4 Aluminum Alloy

Shimokawa and Hamaguchi [10] established a rather large collection of fatigue data for 2024-T4 aluminum alloy (AA). The specimens were subjected to constant amplitude loading. They conducted tests on rectangular specimens that were 110 mm long, 52 mm wide, and 1 mm thick. The principal focus of their investigation was the effect of different types of notches or holes on fatigue life. The analyses below utilize two of their experimental programs.
Center Cut Circular Hole

These samples had a center cut circular hole of radius 5 mm. The fatigue data for this condition are summarized in Table 1, which is reproduced from [10] for completeness. Note that Δσ is the applied stress amplitude, and, for each Δσ, n is the sample size, x̄ is the sample average, s is the sample standard deviation, and cv is the sample coefficient of variation. For the eight different values of Δσ, a total of 222 fatigue tests were conducted. Clearly, the statistical behavior is different when Δσ is 157 MPa and greater compared to applied loads less than that. For Δσ that replicate ordinary operations, fatigue lives are usually longer with greater variability. Thus, modeling requires special consideration.

The fatigue failure data for the 2024-T4 AA specimens with a center cut circular hole are shown in Figure 1. The data are plotted on two-parameter Weibull probability paper. By observation, a two-parameter Weibull cumulative distribution function (cdf) is acceptable for Δσ greater than or equal to 177 MPa. For Δσ less than 177 MPa, a two-parameter Weibull cdf is not acceptable because the tails of the data deviate too much from linearity. In other words, the Anderson-Darling (AD) goodness-of-fit test indicates that a two-parameter Weibull cdf is not acceptable for these data. Possibly, a three-parameter Weibull cdf might be a better choice; however, the AD test implies that the tails of the data for Δσ equal to 127 and 137 MPa are sufficiently different that three-parameter Weibull cdfs are not acceptable. A log-normal cdf was proposed in [10]. It was indicated in that paper that the log-normal cdf was a better choice than a two-parameter Weibull cdf. When Δσ equals 127 or 137 MPa, a log-normal cdf is not acceptable, according to the AD test. All of the above discussion is solely statistical modeling, which is empirical, and the selection of a cdf is grounded in pragmatism.

It is also appropriate to show the fatigue data in a stress-life (S-N) format. Figure 2 displays the S-N data for the specimens with a center cut circular hole. As mentioned above, it is clear graphically that the scatter is nearly the same for each Δσ above 150 MPa, but below 150 MPa, the scatter is significantly greater. For the two smallest values of Δσ, the scatter exceeds an order of magnitude.
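The kind of distributional screening described above can be reproduced in a few lines. The sketch below assumes a plain array of fatigue lives, since the data themselves live in Table 1 of [10]; it fits a two-parameter Weibull to a sample and applies a Kolmogorov-Smirnov test. The Anderson-Darling variant used in the paper weights the tails more heavily but follows the same pattern.

```python
import numpy as np
from scipy import stats

# Hypothetical fatigue lives in cycles for one stress amplitude;
# the real values are in Table 1 of Shimokawa and Hamaguchi [10].
lives = np.array([152e3, 168e3, 175e3, 190e3, 204e3, 221e3,
                  240e3, 263e3, 295e3, 340e3])

# Two-parameter Weibull: fix the location at zero and fit the
# shape (c) and scale parameters by maximum likelihood.
c, loc, scale = stats.weibull_min.fit(lives, floc=0)

# Kolmogorov-Smirnov test of the fitted cdf against the sample.
ks = stats.kstest(lives, "weibull_min", args=(c, loc, scale))
print(f"shape={c:.2f}, scale={scale:.0f}, KS p-value={ks.pvalue:.3f}")
```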
Center Cut Notch

Another series of fatigue experiments for 2024-T4 AA included in [10] used the same rectangular specimens but with a different notch design. These specimens had a center cut notch 10 mm long with a maximum center width of 3 mm tapered down to a tip of radius 0.25 mm. The loading was perpendicular to the notch. The fatigue lives are summarized in Table 2. There were nine different values for Δσ, for a total of 252 tests for this specimen type. For Δσ greater than or equal to 147 MPa, the scatter in fatigue lives is about the same; however, when Δσ is less than 147 MPa, the scatter increases as Δσ decreases. Notice the asterisk for x̄, s, and cv when Δσ is 64 MPa. The reason is that the three maximum fatigue data for this case are censored. Thus, estimates for the mean and standard deviation cannot be computed by simple averaging, as with the other values of Δσ. An excellent nonparametric estimate for the mean and standard deviation can be obtained by using the Kaplan-Meier estimator for the empirical distribution function. A well-developed presentation of the Kaplan-Meier methodology can be found in [12].
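For completeness, here is a minimal sketch of a Kaplan-Meier mean estimate under right censoring. It builds the product-limit survival curve directly rather than calling a survival-analysis library, and the sample values are invented; the three censored observations mimic the Δσ = 64 MPa situation described above.

```python
import numpy as np

# Hypothetical fatigue lives (cycles); the last three were stopped
# before failure, mirroring the censored runouts at 64 MPa.
times    = np.array([1.2e6, 1.8e6, 2.5e6, 3.1e6, 4.0e6, 5.0e6, 5.0e6, 5.0e6])
observed = np.array([1, 1, 1, 1, 1, 0, 0, 0])  # 1 = failure, 0 = censored

def kaplan_meier_mean(times, observed):
    """Mean life as the area under the Kaplan-Meier survival curve
    (restricted to the largest observation when it is censored)."""
    order = np.argsort(times)
    t, d = times[order], observed[order]
    n_at_risk = len(t)
    surv, prev_t, prev_s, mean = 1.0, 0.0, 1.0, 0.0
    for ti, di in zip(t, d):
        mean += prev_s * (ti - prev_t)   # rectangle under the step curve
        if di:                           # survival drops only at failures
            surv *= (n_at_risk - 1) / n_at_risk
        n_at_risk -= 1
        prev_t, prev_s = ti, surv
    return mean

print(f"Kaplan-Meier mean life: {kaplan_meier_mean(times, observed):.3e} cycles")
```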
For the notched specimens, the fatigue failure data are shown in Figure 3. There are similarities to the data in Figure 1. For the six largest values of Δσ, a two-parameter Weibull cdf is acceptable; however, for the other three conditions, it would not be acceptable. The scatter in the data coupled with the curvature makes a statistical fit more challenging. Furthermore, when Δσ is 64 MPa, there are three identical data that were censored, as indicated by the arrow in Figure 3. Thus, the censoring has a significant effect on fitting a cdf for these data. Again, empirically selecting a suitable cdf is nontrivial, and it appears that more involved analysis may be needed. Likewise, Figure 4 is the S-N diagram for the specimens with a notch. When Δσ exceeds 100 MPa, the scatter is similar, but below 100 MPa, the scatter increases as Δσ decreases. Again, the arrow indicates the three identical data that were censored.
A Fatigue Crack Growth Model for the 2024-T4 AA Data

The selection of an acceptable mechanistic model for any fatigue problem is difficult. This is equally true for the two examples considered herein. A nontrivial reason for this is that the experiments reported in [10] were conducted about four decades ago. Nevertheless, a simplified fatigue crack growth model is proposed. Given a crack length of a for N cycles, the crack growth rate da/dN is assumed to be characterized by the following equation:

da/dN = C (ΔK − ΔK_th)^ρ, (1)

where ΔK is the driving force and ΔK_th is the threshold. The materials constants for 2024-T4 AA are C and ρ. For the 2024-T4 AA considered, ρ is assumed to be 3.33.

Center Cut Circular Hole

Failure is assumed to be caused by a semi-circular surface crack that transitions into a through-the-thickness crack. Thus, ΔK differs for the two regimes. The driving force for a surface crack (sc), ΔK_sc, is assumed to be the following:

ΔK_sc = (2.24/π) k_t Δσ √(πa), (2)

where 2.24/π is the geometric factor for a semi-circular crack in an infinite plate, and k_t is the stress concentration factor for the hole. Using Figure 2.59 in [13] for an estimated value of the stress concentration factor for the test specimens, k_t is 2.5. Similarly, the driving force for a through-the-thickness crack (tc), ΔK_tc, is

ΔK_tc = F_tc(a/r_o) Δσ √(πa), (3)

where r_o is the radius of the hole. Numerical values for F_tc(a/r_o) for an infinite plate under uniaxial tension containing a circular hole with a single through crack emanating from the hole perpendicular to the loading axis can be fit empirically, to within graphical resolution, by a simple function of a/r_o, Equation (4); see reference [14]. Equations (3) and (4) were used for simplicity and computational convenience.

The fatigue life N_f is the sum of the cycles needed for the surface crack growth N_sc and the through-the-thickness crack growth N_tc, i.e.,

N_f = N_sc + N_tc = ∫_{a_o}^{a_tc} (da/dN)^{-1} da + ∫_{a_tc}^{a_f} (da/dN)^{-1} da, (5)

where a_o is the initial damage size, a_tc is the crack size at which the surface crack transitions into a through-the-thickness crack, and a_f is the final crack size. The integrands in Equation (5) come directly from Equation (1). With ΔK_sc given in Equation (2), the first integral can be evaluated explicitly. With ΔK_tc defined by Equation (3), the second has to be integrated numerically. It is assumed that a surface crack transitions into a through-the-thickness crack at a_tc, which is the solution of ΔK_sc = ΔK_tc, i.e., (2.24/π) k_t = F_tc(a_tc/r_o). For the specimens under consideration, r_o is 5 mm, which implies that a_tc is 2.31 mm. Since the width of the specimen is 52 mm, a_f is set to be 21 mm.
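A numerical version of Equation (5) is straightforward. The sketch below assumes the forms of Equations (1)-(3) as reconstructed above, uses an illustrative quadratic stand-in for the paper's empirical fit F_tc(a/r_o) (Equation (4), not reproduced here), and picks placeholder values for C, ΔK_th, and a_o; it is meant only to show the mechanics of the two-regime integration, not to reproduce the paper's numbers.

```python
import numpy as np
from scipy.integrate import quad

# Geometry and loading for the center cut circular hole case.
r_o, a_tc, a_f = 5e-3, 2.31e-3, 21e-3   # m
k_t, rho = 2.5, 3.33
dsigma = 157.0                           # MPa, one of the tested amplitudes

# Placeholder material constants (the paper samples these from the
# three-parameter Weibull cdfs in Table 3); units: m/cycle, MPa*sqrt(m), m.
C, dK_th, a_o = 1e-11, 1.5, 20e-6

def F_tc(a_over_r):
    # Stand-in for the paper's empirical fit (Equation (4));
    # quadratic shape chosen purely for illustration.
    return 1.12 + 0.5 * a_over_r + 0.1 * a_over_r**2

def dK_sc(a):
    return (2.24 / np.pi) * k_t * dsigma * np.sqrt(np.pi * a)

def dK_tc(a):
    return F_tc(a / r_o) * dsigma * np.sqrt(np.pi * a)

def inv_rate(a, dK):
    # 1/(da/dN) from Equation (1); clamp to avoid a zero below threshold.
    return 1.0 / (C * max(dK(a) - dK_th, 1e-9) ** rho)

N_sc, _ = quad(inv_rate, a_o, a_tc, args=(dK_sc,))
N_tc, _ = quad(inv_rate, a_tc, a_f, args=(dK_tc,))
print(f"N_f = N_sc + N_tc = {N_sc + N_tc:.3e} cycles")
```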
The variables C, ΔK_th, and a_o are assumed to be random variables (rvs) that characterize the variability in the microstructural properties of the material. They are also assumed to be independent of the loading and time. A three-parameter Weibull cdf has been used frequently to represent material properties, and it is used for these rvs. The form used herein is given by

F(x) = 1 − exp{−[(x − γ)/β]^α}, x ≥ γ, (8)

where α is the shape parameter, β is the scale parameter, and γ is the minimum. An important property of Equation (8) is the mean µ, which is

µ = γ + β Γ(1 + 1/α), (9)

where Γ(•) is the Gamma function. Another significant characteristic is the cv, which is

cv = β [Γ(1 + 2/α) − Γ²(1 + 1/α)]^{1/2} / µ. (10)

The estimated values for the parameters for the rvs are based on a conglomeration of data for 2024-T3; see [15-20]. It is assumed that the material properties are sufficiently close to those of 2024-T4 that they can be used for the ensuing analyses. Table 3 contains the parameter values for the rvs used for the subsequent computations. Figure 5 shows the fatigue failure data for Δσ equal to 123, 137, and 206 MPa, which are also in Figure 1. The dashed lines are the simulated model cdfs developed above, which are entirely independent of the fatigue lives. The model is quite good when Δσ is 123 MPa. When Δσ is 137 or 206 MPa, the model is not suitable at all. In fact, the model and the data have a maximum deviation of almost an order of magnitude when Δσ is 206 MPa. For the other values of Δσ shown in Figure 1, the model is likewise not appropriate. Consequently, an alternative approach is required.
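The three-parameter Weibull rvs are easy to simulate. The snippet below draws samples via the inverse cdf of Equation (8) and verifies Equations (9) and (10) empirically; the parameter values are placeholders standing in for Table 3, which is not reproduced numerically here.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

# Placeholder (alpha, beta, gamma_min) triple standing in for Table 3.
alpha, beta, gamma_min = 2.0, 1.5e-11, 0.5e-11   # e.g., the rv C

# Inverse-cdf sampling of the three-parameter Weibull, Equation (8).
u = rng.uniform(size=200_000)
x = gamma_min + beta * (-np.log(1.0 - u)) ** (1.0 / alpha)

mu = gamma_min + beta * gamma(1 + 1 / alpha)                   # Equation (9)
sd = beta * np.sqrt(gamma(1 + 2 / alpha) - gamma(1 + 1 / alpha) ** 2)
cv = sd / mu                                                   # Equation (10)

print(f"theory: mean={mu:.3e}, cv={cv:.3%}")
print(f"sample: mean={x.mean():.3e}, cv={x.std() / x.mean():.3%}")
```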
Center Cut Notch

For this case, the stress concentration factor used in [10] is 3.8, and that is assumed for the ensuing computations as well. The failure, again, is assumed to be caused by a semi-circular surface crack that emanates from the notch. Since the through-the-thickness portion of the crack growth was relatively insignificant for the center cut circular hole computation, it has been omitted for this case. Thus, ΔK is assumed to have the same form as Equation (2). The rvs C and ΔK_th are assumed to have the same cdfs as above because they are characteristic of the material properties. For a_o, however, the surface area from which a crack emanates is about 20 times greater for the center cut circle specimens than for the center cut notch specimens. Consequently, the cdf for a_o is adjusted. The mean is increased to 19.9 × 10^-6 m and the cv is reduced to 10.4%. As the surface area under high stress decreases, the critical size for crack initiation increases, but with fewer such sites in the field. Clearly, this needs to be verified.

In Figure 6, the fatigue failure data shown are for Δσ equal to 64, 118, and 206 MPa. These data are also part of Figure 3. Again, the dashed lines are the cdfs computed by simulating the model, which is a computation independent of the experimental fatigue lives shown. When Δσ is 64 MPa, the model is graphically quite good. Recall that the arrow indicates censored data. For the other two cases shown, Δσ equal to 118 or 206 MPa, the model represents the data quite poorly. Likewise, for the other values of Δσ shown in Figure 3, the model is unacceptable. As indicated with the center cut circular hole data, a different tactic is needed.
Model Calibration for Fatigue Life Analysis

The modeling for the cdfs shown in Figures 5 and 6 is excellent for the smallest applied load, when Δσ is 123 MPa for the center cut circular hole case and when Δσ is 64 MPa for the center cut notch condition; however, for the others, they are poor representations of the experimental data. Fortunately, the data for each Δσ are independent of the modeling, and they are available to augment the modeling results. The proposed approach to control the difference between the fatigue data and the model simulations is a straightforward empirical calibration. For each given value of the applied stress Δσ, let N_i be an experimental fatigue life out of a total of n, and similarly, let Y_j be one of the m simulated values for the model. Because the magnitudes of fatigue lives for the data and model simulations are so large, and because they frequently exhibit substantial scatter, they are transformed initially using the natural logarithm. That is, let LN_i and LY_j be ln(N_i) and ln(Y_j), respectively. The transformation LZ_j that is applied to LY_j consists of a rotation and translation, so that the sample averages and sample standard deviations of the LN_i and LZ_j collections are identical. This is accomplished by the following equations:

LZ_j = L̄N + (s_LN / s_LY)(LY_j − L̄Y), (11)

where

L̄N = (1/n) Σ LN_i and L̄Y = (1/m) Σ LY_j (12)

are the sample averages, and s_LN and s_LY are the sample standard deviations of {LN_i : 1 ≤ i ≤ n} and {LY_j : 1 ≤ j ≤ m}, respectively. To return to actual cycles, the LZ_j values are transformed by applying the exponential function.
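A direct implementation of the calibration in Equations (11) and (12) takes only a few lines. This is a minimal sketch, assuming the experimental lives and the simulated lives are already available as arrays; the sample values are invented.

```python
import numpy as np

def calibrate(simulated, experimental):
    """Rotate and translate log simulated lives so their sample mean and
    standard deviation match those of the log experimental lives
    (Equations (11) and (12)); returns calibrated lives in cycles."""
    LY = np.log(simulated)
    LN = np.log(experimental)
    LZ = LN.mean() + (LN.std(ddof=1) / LY.std(ddof=1)) * (LY - LY.mean())
    return np.exp(LZ)

# Illustrative use with made-up values: model lives off by a factor
# and with the wrong spread, as in Figure 5 at 206 MPa.
rng = np.random.default_rng(7)
experimental = rng.lognormal(mean=11.5, sigma=0.25, size=30)
simulated    = rng.lognormal(mean=12.8, sigma=0.60, size=10_000)

calibrated = calibrate(simulated, experimental)
print(np.log(calibrated).mean(), np.log(experimental).mean())  # now equal
```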
Center Cut Circular Hole

Figure 7 shows the fatigue data for the center cut circular hole specimens, which are also in Figure 1. In addition, the solid lines are the calibrated cdfs as described above. Visually, all of the calibrated cdfs characterize the data quite well. A comparison of the model cdf in Figure 5 with the calibrated cdf in Figure 7 when Δσ is 123 MPa indicates very little difference. Undoubtedly, if the model is accurate, there is little need for any calibration. When Δσ is 137 MPa or 206 MPa, however, the contrast between the model cdfs and the calibrated cdfs is striking. It clearly demonstrates the need for the translation and rotation in Equation (11). The Kolmogorov-Smirnov (KS) and AD goodness-of-fit tests were applied to validate the quality of the calibrated cdfs. The largest KS test statistic for the eight different values of Δσ is 0.18, which indicates that the calibrated cdf is acceptable for each value of Δσ for any significance level less than 0.20. The KS test primarily reflects the behavior of the central region of the data. The AD test implies that the cdfs are acceptable at the same significance level for each Δσ except 137 and 157 MPa. These two cdfs are not acceptable, according to the AD test. The reason for this observation is that the AD test describes the behavior in the tails of the cdf. This is apparent in Figure 7 because the tails are quite distinct from the fatigue data. In both cases, however, the cdfs are on the conservative side of the data, and would serve as suitable cdfs for prediction. Therefore, the calibrated cdfs are acceptable representations of the fatigue data. Because the calibrated cdfs are a combination of basic mechanistic modeling and experimental fatigue data, they are appropriate for estimation and prediction beyond the data range, especially for applied loads that represent typical operating conditions.

Another way in which to assess the validity of the calibrated cdfs is to consider the S-N behavior. Consider Figure 8, which is a reproduction of Figure 2 with estimated percentile lines added. These percentiles are taken directly from the calibrated cdfs shown in Figure 7. The solid line consists of the estimated medians. Because the calibrated cdfs are excellent representations of the central portion of the data, the estimated medians are also quite good. The dashed lines are the estimated 99% confidence bounds. The upper bound is the estimated 99.5 percentile and the lower bound is the 0.5 percentile computed from the calibrated cdfs. All the data lie between the bounds, and the bounds are very tight. Using the calibrated cdfs provides an excellent characterization of the fatigue data. As a final comment, it should be noted that the sharp corner on the lower bound when Δσ is 127 MPa corresponds to the difference between the calibrated cdf and the data in the lower tail in Figure 7. Because the calibrated cdf is conservative, the lower bound is appropriate.
Center Cut Notch

As with the above example, Figure 9 contains the fatigue data for the center cut notch specimens, which were also shown in Figure 3, and the calibrated cdfs for each value of Δσ. Recall that the arrow in Figure 9 indicates that there are three censored data. Graphically, the calibrated cdfs characterize the data quite well. Except for the smallest data when Δσ is 98 MPa and the largest data when Δσ is 78 MPa, the data are close to the calibrated cdfs. As with the center cut circular hole case when Δσ is the smallest, i.e., 123 MPa, when Δσ is 64 MPa the difference between the model cdf and the calibrated cdf is very small. To further assess the goodness of fit of the calibrated cdfs, the KS and AD tests were used. The KS test indicates that all of the calibrated cdfs are acceptable for any significance less than 0.20. The AD test infers that the calibrated cdfs are acceptable for any significance value less than 0.20, except when Δσ is 98 MPa, 147 MPa, and 216 MPa. The AD test implies that the calibrated cdf is not acceptable when Δσ is 98 MPa. This is clearly seen in Figure 9 because the tails are not very close to the calibrated cdf. Finally, the calibrated cdfs for Δσ equal to 147 MPa and 216 MPa are acceptable at a significance of 0.05. All things considered, the calibrated cdfs are acceptable as estimates for the center cut notch fatigue data, except when Δσ is 98 MPa.

Before continuing, recall that because of the censored data when Δσ equals 64 MPa, the sample average and standard deviation were estimated by using the Kaplan-Meier methodology [12]. They are recorded in Table 2. These estimates were used in the calibration; see Equation (12). The inference is that the proposed calibration approach is also suitable when censored data are part of the results. In fact, the methodology requires no modification as long as the sample average and standard deviation can be suitably estimated.
Figure 10 shows the S-N data from Figure 4 with the estimated percentile lines. As before, the median behavior is the solid line, and the 99% confidence bounds are the dashed lines; all of these are obtained from the calibrated cdfs shown in Figure 9. The estimated medians are excellent. The confidence bounds characterize the data well. Not only are they close to the data, but they also reflect the scatter for each given value of Δσ. For the censored data, it is conceivable that the actual life is outside the confidence bounds. Even if this were the case, the bounds are excellent because they are conservative.

Mean Square Error Analysis

Mean square error (MSE) analysis is a well-known methodology to assess the validity of an estimation. The error e_i is the difference between the calibrated cdf and the fatigue data. The MSE is given by

MSE = (1/n) Σ e_i². (13)

Approximating confidence bounds with the MSE is typically done by using the square root of the MSE in Equation (13). Let σ_MSE be the square root of the MSE, which can be taken as an estimate for the standard deviation. For unbiased error distributions, the standard error is equivalent to σ_MSE; see reference [21]. Additional information on the MSE can be found in [22]. When e_i is epitomized by a normal cdf, 95% confidence bounds are estimated by adding and subtracting 2σ_MSE from the calibrated cdf.
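The MSE confidence bounds are simple to compute once the calibrated cdf can be evaluated at each observed life. A minimal sketch, with hypothetical errors and calibrated lives in cycles:

```python
import numpy as np

# Hypothetical differences (in cycles) between the calibrated cdf and
# the ordered fatigue data at matching probability levels.
errors = np.array([-9.5e3, 4.1e3, 1.2e3, -2.3e3, 8.8e3, -5.0e3])

mse = np.mean(errors**2)        # Equation (13)
sigma_mse = np.sqrt(mse)

# Assuming approximately normal errors, shift the calibrated cdf by
# +/- 2*sigma_MSE for approximate 95% confidence bounds.
calibrated_lives = np.array([1.51e5, 1.73e5, 1.92e5, 2.14e5, 2.40e5, 2.78e5])
lower = calibrated_lives - 2 * sigma_mse
upper = calibrated_lives + 2 * sigma_mse
print(f"sigma_MSE = {sigma_mse:.3e} cycles")
```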
Figure 11 shows the fatigue data for three values of Δσ for the center cut notch specimens along with the calibrated cdfs from Figure 9. The three examples shown represent the range of accuracy of the calibrated cdfs. The dashed lines for each case are the estimated 95% MSE confidence bounds. Clearly, the bounds encompass the data in each case. The widths of the bounds depend on the accuracy of the calibrated cdf. When Δσ is 118 MPa, the calibrated cdf is an excellent approximation for the data. The average error is only 970 cycles, and the corresponding σ_MSE is 8600 cycles. For this case, ±2σ_MSE is only about 5% of the median behavior. Consequently, the bounds reflect the data extremely well. The calibrated cdf is not as close to the data when Δσ is 98 MPa, especially in the lower tail. Here, the average error is 3100 cycles, the corresponding σ_MSE is 47,500 cycles, and ±2σ_MSE is about 12% of the median. Also, the difference between the calibrated cdf and the lower confidence bound is the same as that between the calibrated cdf and the smallest fatigue data; this difference is basically 2σ_MSE. For the center cut notch specimens, the MSE confidence bounds are very good for each Δσ with a complete set of fatigue data, including Δσ equal to 98 MPa. The MSE analysis is another validation of the proposed methodology.
The MSE analysis for confidence bounds for the center cut circular hole data is essentially the same. When Δσ is greater than 157 MPa, the calibrated cdfs are excellent fits to the fatigue data; see Figure 7. For these four, the MSE confidence bounds are very tight and envelop all the data, like the example when Δσ is 118 MPa in Figure 11. For Δσ equal to 137 MPa, the MSE confidence bounds contain the data, but they are wider because of the deviation in the tails of the cdf. The σ_MSE is 43,000 cycles, and ±2σ_MSE is about 17% of the median. Similarly, when Δσ is 157 MPa, the MSE bounds encompass the data and are rather tight because σ_MSE is only 7200 cycles, and ±2σ_MSE is only 5% of the median. For the remaining two values of Δσ, the MSE is not acceptable because the error is so large in magnitude. For Δσ equal to 127 MPa, the error between the minimum data and the calibrated cdf is over 150,000 cycles. At the maximum data, the error is over 1,050,000 cycles. Altogether, the average error is 52,000 cycles and σ_MSE is 340,000 cycles. When Δσ is 123 MPa, there are three data points near the median where e_i is quite large. The MSE analysis is not as robust for the center cut circular hole; nevertheless, it lends credibility to the proposed methodology.
Sample Size for Calibration

In experimental work, the overriding issue is the number of tests required to adequately characterize the property being investigated. One of the best and most complete professional guidelines for material properties that indicates the acceptable sample size for a qualified experimental program is MMPDS [23], which is a scientifically developed procedure for metallic materials to assess experimental and design data so that they are acceptable for certification. MMPDS is a joint effort of government agencies and industrial, educational, and international aerospace organizations. Specifically, Chapter 9 is related to statistical analysis. In Section 9.9.1.1, the comment is made that, for fatigue experimentation subjected to load-controlled conditions, each load should include at least six observations to failure. While this is a good rule of thumb, it may not be sufficient to fully characterize the scatter for a given load condition.

As an example, consider the center cut notch when Δσ is 69 MPa, which is shown in Figures 3, 9 and 11. This example is chosen because there is substantial scatter in the data and the calibrated cdf is an excellent fit. The sample size is 30. The main purpose of this effort was to demonstrate that the calibration method is effective and warranted. The query is whether fewer than 30 data points would have been just as effective for the calibration. Figure 12 shows the fatigue lives when Δσ equals 69 MPa, the calibrated cdf, and the MSE confidence bounds, which were also shown in Figure 11. The only difference is that the axis for the cycles has been expanded for the graph in Figure 12. Arbitrarily, 15 out of the 30 data points were randomly selected and used to calibrate the cdf. The white data points are the ones that were randomly chosen. The corresponding calibrated cdf using just the 15 randomly selected data points is represented by the short-long dashed line. Graphically, it is reasonably similar to the cdf calibrated using all 30 data points; however, there is some deviation in the lower tail. In fact, the KS and AD tests, comparing the entire sample with the augmented cdf, indicate that it is acceptable for any level of significance less than 0.2. The MSE confidence bounds, shown as the short-short dashed lines, are a bit wider, but not overly so. Thus, a sample size of only 15 may have been acceptable. Alas, caution must be exercised because the random sample shown is excellent in that the 15 data points are widely distributed over the entire sample of 30. Due to randomness, the 15 selected data points could have primarily reflected the upper tail, which would not have adequately served for calibration. Further analysis is required prior to making a definitive statement about the sample size.
To add a bit more understanding about the required sample size for quality calibration, a random selection from the 30 fatigue data points for Δσ equal to 69 MPa was repeated 1000 times. The size of the random sample was 10, 15, or 20. It should be noted that the total number of ways to select 10 or 20 data points from 30 is over 30 million, and the number of ways to choose 15 out of 30 is over 115 million. Thus, repeating the calibrations 1000 times will not lead to duplications. As expected, when only 10 data points are used for the calibration, the ensuing cdf may not be acceptable. Out of the different attempts, the KS test implied that 4.9% would be unacceptable. Of the remainder, 72.6% would be acceptable for any significance below 0.2, and the rest would be acceptable at a smaller significance. The AD test was more severe because 53.0% of the calibrated cdfs were unacceptable, and only 20.7% were acceptable at a significance of 0.2. If 15 data points are used to calibrate the cdf, the results improve. Less than 1% are unacceptable, according to the KS test, but 31% are still unacceptable using the AD test. Using 20 randomly selected data points for the calibration improves the results considerably. The KS test indicates that 100% of the cdfs are acceptable at any significance less than 0.2. Because the tail behavior is more challenging, the AD test yields 11.0% that are unacceptable and 60.9% that are acceptable at a significance less than 0.2, and the remaining attempts are acceptable at a smaller level of significance.

Certainly, the more data that are available, the better the calibration will be. The scatter in the data is not that large when Δσ is 69 MPa. Thus, fewer data may be sufficient for the calibration. In fact, 25 to 30 data points seems to be appropriate. When there is more scatter in the data, more data may be needed in order to achieve an acceptable calibration. For example, the lower tail of the data when Δσ equals 98 MPa is sufficiently different from the calibrated cdf that additional data would be helpful. It is difficult, a priori, to select an appropriate sample size for fatigue testing, but 30 tests for each loading condition is an excellent beginning.
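The repeated-subsample experiment just described can be sketched as follows. The lives are synthetic stand-ins for the 30 observations at 69 MPa, the calibration reuses the function from the earlier sketch, and a two-sample KS test substitutes for the paper's exact test procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-ins for the 30 fatigue lives at 69 MPa and for the
# model simulations (both invented for illustration).
full_sample = rng.lognormal(mean=14.2, sigma=0.5, size=30)
simulated   = rng.lognormal(mean=15.0, sigma=0.9, size=5000)

def calibrate(sim, exp):
    LY, LN = np.log(sim), np.log(exp)
    LZ = LN.mean() + (LN.std(ddof=1) / LY.std(ddof=1)) * (LY - LY.mean())
    return np.exp(LZ)

def acceptable_fraction(k, trials=1000, alpha=0.2):
    """Fraction of random size-k calibrations the KS test accepts
    when compared against the full 30-point sample."""
    hits = 0
    for _ in range(trials):
        subset = rng.choice(full_sample, size=k, replace=False)
        calibrated = calibrate(simulated, subset)
        # Two-sample KS test: full data vs. subset-calibrated cdf.
        if stats.ks_2samp(full_sample, calibrated).pvalue > alpha:
            hits += 1
    return hits / trials

for k in (10, 15, 20):
    print(k, "->", acceptable_fraction(k))
```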
Results and Discussion

The purpose of this effort was to demonstrate the validity and value of calibrating a cdf for fatigue life with independent experimental data. The cdf is generated from a probabilistic fatigue crack growth model using standard simulation methods. Even though the model is somewhat simplistic, the proposed methodology yields convincing results. The fatigue data considered were taken from [10]. Two different types of specimens, based on different center cut features, were used in the analysis. One set of experiments was conducted with specimens with a center cut circular hole, and the other set used a center cut notch design. There were eight different values of the stress amplitude Δσ for the center cut circular hole specimens, and nine different ones for the center cut notch specimens. An extremely significant feature of these data sets is that the amount of data for each value of Δσ is noteworthy. All things considered, the methodology produces excellent results for estimation and prediction of the fatigue behavior. A primary motive for this process is to improve the characterization and accuracy of the cdf for fatigue life given Δσ. The calibration of the model cdf with data drastically improves the estimation because the uncertainty is controlled empirically. Certainly, as the modeling is improved, the overall accuracy is likewise better, and the reliance on the data for the calibration is diminished. This is illustrated in Figure 7 when Δσ is 123 MPa, and in Figure 9 when Δσ is 64 MPa.

For the fatigue model, it was assumed that three rvs and their associated cdfs were sufficient to capture the majority of the variability. Even so, it was shown that most of the simulated cdfs were not an accurate characterization of the fatigue life. In fact, some cdfs differed from the corresponding data by almost an order of magnitude. Further improvements in modeling could alleviate these discrepancies. Even for cdfs where the scatter in the data was relatively small, there were significant differences.

The proposed calibration method was demonstrated to be quite useful for the two different types of specimens and the multiple values of Δσ for each. The validation for the approach was strongly established for all but three of the values of Δσ; however, even those three were well characterized by the S-N behavior. Nevertheless, the fatigue life model could be improved, which would lead to even more accurate calibrations. From this effort, the proposed methodology appears to be warranted. The approach should be implemented for additional applications to determine its full capability.

A few final comments are in order regarding the sample size needed for the calibration. For fatigue experimentation, especially for critical load bearing components, 30 tests for the key loading conditions is an excellent rule of thumb. If an accurate mechanical model for fatigue can be established, then possibly as few as 15 tests may be sufficient. The sample size needed ultimately depends on the amount of scatter in the data.
Figure 5. Selected fatigue failure data for 2024-T4 AA specimens with a center cut circular hole [10], and the corresponding simulated model for the given Δσ.

Figure 6. Selected fatigue failure data for 2024-T4 AA specimens with a center cut notch [10], and the corresponding simulated model for the given Δσ.

Figure 7. Fatigue failure data for 2024-T4 AA specimens with a center cut circular hole [10], and the corresponding calibrated cdfs for the given Δσ.

Figure 8. S-N data for 2024-T4 AA specimens with a center cut circular hole [10], and estimated median and 99% confidence bounds from the calibrated cdfs.

Figure 9. Fatigue failure data for 2024-T4 AA specimens with a center cut notch [10], and the corresponding calibrated cdfs for the given Δσ.

Figure 10. S-N data for 2024-T4 AA specimens with a center cut notch [10], and estimated median and 99% confidence bounds from the calibrated cdfs.

Figure 11. Fatigue failure data for 2024-T4 AA specimens with a center cut notch [10] for selected values of Δσ, the corresponding calibrated cdfs, and MSE confidence bounds.

Figure 12. Fatigue failure data for 2024-T4 AA specimens with a center cut notch [10] for Δσ equal to 69 MPa; the calibrated cdfs and MSE confidence bounds using all 30 data points and 15 randomly selected data points are shown.

Table 3. Weibull parameters used in the fatigue crack growth model.
2024-07-14T15:51:21.202Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "102fa15e50f6ff828989b6062d24f95bc3e78051", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ddc42a33cffee4c779b69d395c3ec76daeb64d2b", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
225290744
pes2o/s2orc
v3-fos-license
Technology Socialization Process of Pulse Enterprise: The Structural and Functional Analysis

The technology socialization process has operationally been defined as the interactive summation of all possible responses to a technology application process in terms of adoption, rejection, discontinuance and reinvention. Here, this interactive summation is measured against a set of standard practices applied in pulse enterprises, and the level of socialization is measured against a "recommended technology". The following specific objectives were set for the present study: to generate basic information on the socialization of pulse crops in the study area; to identify and standardize the variables, dependent and independent, impacting the socialization of pulse crops in the study area; to elucidate the inter- and intra-level interaction between the dependent variable, i.e., socialization, and the selected socio-economic and ecological variables; and to delineate micro-level policy based on the empirical results on the effective socialization process. The study was carried out in two developed blocks, namely Chakdah and Haringhata, of Nadia District in West Bengal. The multistage purposive and random sampling techniques were the key to constructing the sampling design in the present study. The variables gross return, area under pulse cultivation, training received, yield and farmer's attitude towards pulse cultivation have been found to generate significant functional impacts on the predicted character, technology socialization. The statistical tools used were the mean, standard deviation, coefficient of variation, coefficient of correlation, multiple regression, step-down multiple regression and path analysis. The study also responded to the inquiry as to where and how the classical crop production process can be replaced with pulse crops, and where this replacement will be most rewarding and beneficial to the common farmer. The determinants gross return, area under pulse crop, training received, productivity of pulse crop and farmer's attitude decisively characterize the socialization process of pulse crops.

INTRODUCTION

Pulses provide a green source of protein to millions in India and beyond. In India, pulse crops have been described as "poor man's meat and rich man's vegetable". They are a rare type of vegetable protein that retains lysine, one of the most important amino acids. Compared with animal protein, they are also a cheaper source of protein. As crops, they need less water and fewer nutrients, and entail lower investment costs. Indian agriculture cannot fulfill the total pulse requirement; hence, a huge expenditure is incurred on pulse imports and exports [1]. In India, major pulses like chickpea, lentil and pigeon pea account for 39, 10 and 21% of the total pulse production in the country [2]. The changing climatic conditions have a major impact on rainfed crops, including pulses [3]. Pulses are reported to be particularly sensitive to heat stress at the bloom stage; only a few days' exposure to high temperatures (30-35°C) can cause heavy yield losses through flower drop or pod damage [4]. Introducing pulse crops that simultaneously adapt to climate change and contribute to mitigating its effects can be key to increasing resilience to climate change in farming [5].
Pulses themselves are, however, very sensitive to torrential rain, especially in the early vegetative stage and at flowering, and a high quantity of rainfall can cause disease infestation in crops [6]. The socialization model herein has been christened as an alternative social process to purvey the transfer process in a multi-way channel and with a multidimensional projection. In the same study, adoption, discontinuance, rejection and reinvention have been conceived as a socio-psychological polymer against a single stimulus, i.e., technology exposure [7]. When society is getting increasingly restless owing to a series of non-compliances, conflicts, comprehensive direction, mutual denial, and disagreement between what we call imposed knowledge vs. inherent knowledge, exotic knowledge or exotic ideas vs. in-situ ideas, protected need vs. felt need, and so on, 'social entropy' or social disorder is expected to simmer. Before adding new skill or useful technical knowledge, we need to study the residual disorder already created by the malfunction of previous technology, and, at the same time, before adding new capacity to community capability, we need to pump out the incapabilities already created by the sneak of the previous technology [8]. That is why the technology socialization model is an inevitable development over the transfer of technology concept, to critically analyse the sub-processes and sub-consequences like adoption, discontinuance, rejection and reinvention, with the analogy that every human mind is a complex disposition of didactic behaviour, forming what may be called diodes of adoption-rejection, adoption-discontinuance, invention-reinvention and creation-culmination [9]. The technology socialization process has operationally been defined as the interactive summation of all possible feedback to a technology application process in terms of adoption, rejection, discontinuance and reinvention [10]. Here, this interactive summation is measured against a set of standard practices applied in pulse enterprises, and the level of socialization is measured against a "standard technology". The technology and the inputs are used as a material account of means to estimate this complex social and qualitative outcome, i.e., technology socialization. In the light of the above discussion, the researcher delineated the specific objectives stated above for the present study.

MATERIALS AND METHODS

The study was carried out in two developed blocks, namely Chakdah and Haringhata, of Nadia District in West Bengal. Both the district and the blocks were selected purposively due to the unique nature of the locations in terms of technology socialization, with a view to the consequences of the innovation decision process, viz. adoption, rejection and discontinuance, and the market behaviour of the pulse enterprises considered for the present study. Two villages out of twenty-seven gram panchayats were purposively selected for the present study.
An exhaustive list of respondents was prepared with the help of farmers, shop owners and panchayat officials. From the list, one hundred fifty respondents were randomly selected for the study. The multistage purposive and random sampling techniques were the key to constructing the sampling design in the present study. A pilot study was conducted in the selected villages before constructing the data devices, to become acquainted with the locale in terms of demography, the level of technology socialization and the market behaviour of pulse enterprises. The variable socialization of pulse enterprise, considered the dependent (predicted or consequent) variable, has been measured in terms of extent of adoption, extent of rejection and extent of discontinuance using the scale developed by S. N. Chattopadhyay (1993), slightly modified for the requirements of the study. The twenty-seven independent (causal, predictor or antecedent) variables were selected, operationalized and measured according to their concepts and relationships with the dependent variable, with the help of exact scales developed by previous social scientists or by slightly modifying those scales for the requirements of the study. The final primary data were collected with the help of a structured interview schedule following the personal interview method. The secondary data were collected following the case study method, to throw light on the intrinsic character of the consequences of the innovation decision process and to establish the conceptual framework of the present study on a strong logical basis.

Correlation Coefficient of Socialization of Pulse Enterprise (Y1) with 27 Independent Variables

Fig. 1 presents the correlation coefficients of the consequent variable, socialization of pulse enterprise (Y1), with the 27 independent variables. It has been found that the variables family size (X3), area under pulse cultivation (bigha) (X5), farmer's attitude towards pulse crop cultivation (X14), knowledge level of the farmer towards cultivation of pulse crops (X15), gross return (Rs/bigha) (X25) and training received (X27) have recorded positive and significant correlations, while productivity or yield (kg/bigha) (X24) has recorded a significant but negative correlation with socialization of pulse enterprise. The results evince that socialization of pulse enterprise (Y1) has scaled up for those having a larger holding, and it has also been elicited by farmers having a proper attitude towards, and adequate knowledge of, pulse cultivation. Socialization of pulse enterprise (Y1) has also helped scale up the gross return because of the training received and the exposure met subsequently. However, it is interesting to note that pulse productivity has been better for those having less exposure to formal socialization programmes organized by formal institutions and organizations. Table 3 presents the path analysis, decomposing the total effect (r) of the antecedent variables into direct, indirect and residual effects. Path analysis has been administered to get the direction and network of influence of the antecedent variables on the consequent variable. From the table, it is clear that the variable gross return (Rs/bigha) (X25) has exerted the highest direct effect on socialization of pulse enterprise (Y1), followed by area under pulse cultivation (bigha) (X5) and productivity or yield (kg/bigha) (X24).
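The direct/indirect decomposition used in the path analysis above follows a standard recipe: the direct effects are the standardized regression (path) coefficients, and the indirect effect of a variable routes its correlations through the other variables' path coefficients. A minimal sketch with made-up data; the labels X5, X24 and X25 follow the paper's variable names, but the numbers are invented for illustration.

```python
import numpy as np

# Invented standardized data: columns are X5 (area), X24 (yield),
# X25 (gross return); y is socialization of pulse enterprise (Y1).
rng = np.random.default_rng(3)
X = rng.normal(size=(150, 3))
y = 0.5 * X[:, 2] + 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(size=150)

# Standardize, then the path coefficients solve R_xx @ p = r_xy.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
R_xx = np.corrcoef(Xs, rowvar=False)
r_xy = Xs.T @ ys / len(ys)
p = np.linalg.solve(R_xx, r_xy)     # direct effects (path coefficients)

# Entry (i, j) of this matrix is the indirect effect of variable i
# routed through variable j, i.e. r_ij * p_j (diagonal zeroed).
indirect = R_xx * p - np.diag(p)

for name, direct, r in zip(["X5", "X24", "X25"], p, r_xy):
    print(f"{name}: total r={r:+.3f}, direct={direct:+.3f}, "
          f"indirect={r - direct:+.3f}")
```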
In the case of indirect effects on socialization of pulse enterprise (Y1), the highest indirect effect was exerted by area under pulse cultivation (X5), followed by gross return (Rs/bigha) (X25) and farmer's attitude towards pulse crop cultivation (X14). It is discernible from the table that the highest number of variables (21) routed their substantial indirect effects through the variable area under pulse cultivation (bigha) (X5). So it could be inferred that these variables possess both substantive and associative properties that characterize the socialization of pulse enterprise (Y1). Land as an endowment is still the most important determinant of the socialization of agricultural technology, including pulse crops; better responses have thus been generated by those having a larger land size, although this didactic relation needs support from the attitude of the respondents towards socialization of pulse enterprise (Y1). The residual effect being 0.2124, it is concluded that 21.24 per cent of the variation in this interaction could not be explained.

CONCLUSION

This section is both the output and the outcome of research with both empirical and social application, vindicated by the research and logically nurtured into conclusions; it goes on to prescribe a pragmatic application of the research output, either to solve a problem or to turn it into a prospect. Every recommendation points to a logical action, a realistic approach and a meaningful intervention. The following recommendations, arising out of the research experience and the analysed information, can now be made:

• We need a comprehensive outlook on the technology socialization process. A study of only one of the consequences (adoption, rejection, discontinuance, etc.) may yield only a cryptic picture of technology transfer. Only a concurrent study of adoption, rejection, discontinuance and reinvention can describe the technology socialization process in its totality.
• Every KVK, extension department, research organization, etc. should collect rejection and discontinuance data rather than harping on the adoption process and demonstrating it in a deceptive manner.
• High cropping intensity brings not only sequels of adoption but also a series of rejections. When one technology is rejected, the alternatives find an opportunity to be adopted; when one is adopted, another needs to be culminated to make room for the newer one. So a 'redox' mode of interaction (adoption-rejection) can be standardized to estimate the plasma stage of socialization, i.e. a mix of adoption and rejection operating in an interchangeable manner.
• Socialization of technology involves cost, time and resources. So every socialization process needs to be socially, economically and chronologically audited and catalogued, if possible crop-wise and input-wise.
• Every community presents a unique cultural echelon which responds uniquely to any technology socialization process. So an integration of social and psychological inputs needs to be rendered measurable through an OVI (objectively verifiable indicator) against a structured format of socialization.
• If the technology transfer process is really to be attempted, then a second thought needs to be elicited. Ask anyone to get the best answer: a technology is a character encapsulated with ideas and thought processes, and certainly does not represent some kilograms of fertilizer whose wielding constitutes the so-called transfer of technology (TOT).

CONSENT

As per international or university standard guidelines, participant consent has been collected and preserved by the authors.
2020-08-27T09:04:10.790Z
2020-08-21T00:00:00.000
{ "year": 2020, "sha1": "69e463fb9586edd7ef8f4d039776f84ec8c08e3d", "oa_license": null, "oa_url": "https://www.journalijecc.com/index.php/IJECC/article/download/30238/56744", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "07c3424c0ca5b1f6b91b50ef682dbfb5c702395b", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Computer Science" ] }
267322156
pes2o/s2orc
v3-fos-license
Interaction of 5-HTTLPR and SLE disease status on resting-state brain function

Background
Neuropsychiatric involvement in systemic lupus erythematosus (SLE) is a common clinical manifestation. In SLE patients, cerebral function is a more sensitive predictor of central nervous system damage, and abnormalities in cerebral function may be apparent before substantial neuropsychiatric symptoms occur. The 5-hydroxytryptamine (5-HT) system has the ability to interact with the majority of the neurochemical systems in the central nervous system (CNS), influencing brain function. The serotonin transporter gene-linked polymorphic region (5-HTTLPR) is an essential element of 5-HT system gene polymorphism and is directly related to the control of 5-hydroxytryptamine transporter (5-HTT) gene expression. The relationship between 5-HTTLPR and functional brain measurements in SLE patients requires more investigation because it is one of the most attractive imaging genetics targets for shedding light on the pathophysiology of neuropsychiatric lupus.

Methods
Resting-state functional magnetic resonance imaging (rs-fMRI) images were collected from 51 SLE patients without obvious neuropsychiatric manifestations and 44 healthy volunteers. Regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), and fractional amplitude of low-frequency fluctuations (fALFF) were selected as indicators for evaluating brain function. In accordance with the Anatomical Automatic Labeling template, the gray matter was divided into 116 regions, and the mean ReHo, mean ALFF and mean fALFF values of each brain region were extracted. The 5-HTTLPR genotypes of all subjects were determined by polymerase chain reaction and agarose gel electrophoresis. Two-way analysis of covariance was used to investigate whether there is an interaction effect between SLE disease status and 5-HTTLPR genotype on resting-state brain function.

Results
In SLE patients with S/S homozygosity, notably lower mean ReHo, mean ALFF and mean fALFF values were observed in the right parietal and inferior angular gyrus and the right paracentral lobule compared to healthy controls; this distinction was not evident among carriers of the L allele. Within the S/S genotype, SLE patients exhibited decreased mean ReHo in the left posterior cingulate gyrus, reduced mean fALFF in the left caudate nucleus, and diminished mean ALFF in the left temporal pole: superior temporal gyrus, in contrast to the HC group; no such differences were discerned among carriers of the L allele. Notably, among L allele carriers, SLE patients displayed a higher mean ReHo value in the right hippocampus compared to the HC group, while demonstrating a lower mean ALFF value in the left medial and paracingulate gyrus; these differences were not apparent among S/S homozygotes.

Conclusions
Brain function in the right parietal and inferior angular gyrus and the right paracentral lobule is affected by the interaction effect of SLE disease status and 5-HTTLPR genotype.
Background

Systemic lupus erythematosus (SLE) is an autoimmune disease that causes the production of many autoantibodies due to aberrant immune system activation, culminating in damage to numerous organs [1]. One of the most typically damaged organs in SLE is the nervous system, and the accompanying central and peripheral nervous system dysfunction is known as neuropsychiatric systemic lupus erythematosus (NPSLE). According to published research, it affects between 12 and 95% of SLE patients [2]. The vast diversity of presentations of NPSLE, from typical headaches, cognitive abnormalities and mood disorders to unusual manifestations such as Guillain-Barré syndrome and autonomic dysfunction, is one of the obstacles doctors frequently confront in diagnosing and managing patients with NPSLE [3]. In SLE patients, cerebral function is a more sensitive predictor of central nervous system damage, and abnormalities in cerebral function may be apparent before substantial neuropsychiatric symptoms occur.

Resting-state functional magnetic resonance imaging (rs-fMRI) is a noninvasive imaging technique that uses the blood oxygenation level-dependent (BOLD) signal to investigate brain function in a variety of central nervous system (CNS) diseases such as Alzheimer's disease, Parkinson's disease, depression, schizophrenia and others [4-7]. A growing number of studies have used functional magnetic resonance imaging (fMRI) to investigate alterations in brain function in SLE patients and have revealed brain functional abnormalities even in non-NPSLE patients [8, 9]. rs-fMRI reflects brain function through multiple indicators: ① the amplitude of low-frequency fluctuation (ALFF) and the fractional amplitude of low-frequency fluctuations (fALFF), which gauge the intensity of neuronal activity in specific localized brain regions [10]; ② regional homogeneity (ReHo), a measure of how well all neurons in a given area of the brain coordinate their spontaneous activity [11]. ReHo values were found to be significantly lower in non-NPSLE patients' fusiform gyrus and thalamus than in healthy controls, whereas they were higher in non-NPSLE patients' parahippocampal and uncinate regions [9].
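Conceptually, ALFF and fALFF are spectral measures of a voxel's BOLD time series, while ReHo is Kendall's coefficient of concordance over a small voxel neighbourhood. The sketch below illustrates both computations on raw arrays; it is a simplified illustration rather than the study's pipeline (the authors used DPARSF/REST), and the TR and 0.01-0.08 Hz band are common-practice assumptions.

```python
import numpy as np

def alff_falff(ts, tr=2.0, band=(0.01, 0.08)):
    """ALFF: sum of spectral amplitudes in the low-frequency band;
    fALFF: that sum divided by the amplitude sum over all frequencies."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                          # remove the DC component
    freqs = np.fft.rfftfreq(len(ts), d=tr)       # frequency axis in Hz
    amp = np.abs(np.fft.rfft(ts)) / len(ts)      # amplitude spectrum
    low = (freqs >= band[0]) & (freqs <= band[1])
    alff = amp[low].sum()
    falff = alff / amp[freqs > 0].sum()
    return alff, falff

def reho_kendalls_w(neighborhood):
    """Kendall's W across the time series of a voxel and its neighbours;
    `neighborhood` has shape (n_voxels, n_timepoints). No tie correction."""
    k, n = neighborhood.shape
    ranks = neighborhood.argsort(axis=1).argsort(axis=1) + 1  # rank each series
    r_sum = ranks.sum(axis=0)                    # rank sums per time point
    s = ((r_sum - r_sum.mean()) ** 2).sum()
    return 12 * s / (k ** 2 * (n ** 3 - n))

# Example on synthetic data with the study's series length (160 time points).
rng = np.random.default_rng(0)
print(alff_falff(rng.standard_normal(160)))
print(reho_kendalls_w(rng.standard_normal((27, 160))))  # 27-voxel cluster
```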
Despite studies demonstrating abnormal brain function in SLE patients, it is unclear what causes NPSLE. Thrombosis, autoantibodies, cytokines and cell-mediated inflammation, as well as changes to the blood-brain barrier, are examples of potential pathogenic processes [2]. Additional research should be done on the effects of genetic variables, which are important in the pathophysiology of SLE, on the structure and function of the brain. Imaging genetics, a type of genetic association analysis that combines imaging and genetics, can investigate the impact of genetic variation on brain structure and function, as well as the relationship between neuropsychiatric disease risk genes, brain activity, clinical manifestations, and so on. One of the more common imaging genetics research targets is the set of 5-hydroxytryptamine (5-HT) pathway-related genes. As a neurotransmitter, 5-HT is a critical signaling molecule in the CNS, projecting from brain-stem cell bodies to control nearly every part of the nervous system. This indicates that the 5-HT system can interact with the majority of neurochemical systems in the CNS [12]. The 5-hydroxytryptamine transporter (5-HTT) is located on the presynaptic membrane of nerves and regulates 5-HT levels by reabsorbing the 5-HT released into the synaptic gap back into the presynaptic terminal. This mechanism establishes 5-HTT as a pivotal element in modulating synaptic transmission within 5-HTergic neurons. The serotonin transporter gene-linked polymorphic region (5-HTTLPR) is an essential element of 5-HT system gene polymorphism and is directly related to the control of 5-HTT gene expression. The L and S alleles of 5-HTTLPR are the most prevalent alleles. An imaging genetics study discovered that 5-HTTLPR L-allele carriers had substantially lower cortical volume in the right anterior midcingulate gyrus compared to S-allele homozygotes in a pooled sample of patients from the major depressive disorder and healthy participant groups [13].

To the best of our knowledge, no studies have been conducted on the association between 5-HTTLPR and brain function indices in SLE patients. This work utilized rs-fMRI to evaluate the effect of 5-HTTLPR gene polymorphism and SLE disease status on resting-state brain function, providing preliminary information for SLE imaging genetics research.

Methods

Participants

SLE patients were recruited from the inpatient department of the Rheumatology and Immunology Department of the First Affiliated Hospital of Kunming Medical University. The inclusion criteria were as follows: (1) the diagnosis of SLE is determined according to the Revised Criteria for the Classification of SLE developed by the American College of Rheumatology (ACR) in 1997 [14]; (2) being within the age range of 15 to 55 years old; (3) right-handed; (4) voluntarily participating in the research and signing the informed consent.
The exclusion criteria were as follows: (1) patients with a history of head trauma; (2) patients with a history of drug or alcohol dependence; (3) patients suffering from other connective tissue diseases, blood system diseases, cardiovascular and cerebrovascular diseases, malignant tumors, etc.; (4) patients with parenchymal brain disease, CNS infection, epilepsy, or other neuropsychiatric diseases not caused by SLE; (5) patients with contraindications to MRI (such as a pacemaker, metal implants in the body, or claustrophobia); (6) pregnant or breastfeeding women; (7) those with abnormal brain structure indicated by conventional T1- and T2-weighted MRI; (8) those with obvious head movement during the scanning process that may affect the results of MRI data preprocessing (in the head movement parameters, translation > 2.0 mm or rotation > 2.0°).

This study included healthy controls (HC) who were matched with the SLE group in terms of age, gender, and years of education.

Testing for 5-HTTLPR

(1) DNA extraction: DNA was isolated from 2 mL of peripheral venous blood anticoagulated with 2% EDTA, following the guidelines provided with the TIANamp Blood DNA Kit DP348 (Tiangen Biotechnology (Beijing) Co., Ltd.). (2) Polymerase chain reaction (PCR) amplification of target genes: the sense primer (5′-GGC GTT GCC GCT CTG AAT GC-3′) and antisense primer (5′-GAG GGA CTG AGC TGG ACA ACCAC-3′) were synthesized by Sangon Biotech Co., Ltd., Shanghai, China. 0.8 µl of sense primer (10 µM), 0.8 µl of antisense primer (10 µM), 10 µl of 2 × GoldStar Best MasterMix (Dye) (Jiangsu Kangwei Century Biotechnology Co., Ltd.), 2 µl of template DNA and 6.4 µl of double-distilled water were mixed in one PCR tube. PCR was started with predenaturation at 95 °C for 10 min, followed by 10 cycles of denaturation at 94 °C for 30 s, annealing at 65 °C for 30 s (1 °C drop per cycle), and extension at 72 °C for 1 min. Then 30 cycles were performed of denaturation at 94 °C for 30 s, annealing at 64.3 °C for 30 s, and extension at 72 °C for 1 min. The last step was a final extension at 72 °C for 5 min. (3) Genotype reading: PCR products were separated by electrophoresis in 3% agarose gel mixed with nucleic acid dyes (voltage 140 V, 40 min). Bands of the target gene were visualized under an ultraviolet gel imaging system, Gel DocEQ (Bio-Rad Laboratories, Inc., CA, USA). The PCR products of the primer pair used in this study had two lengths, 484 bp and 528 bp. A band of 484 bp alone was identified as the S/S genotype, and a band of 528 bp alone as the L/L genotype. Bands of 484 bp and 528 bp appearing at the same time were identified as the L/S genotype. The electropherogram of 5-HTTLPR genotyping is shown in Fig. 1.

Rs-fMRI acquisition

The MRI scans for all subjects were conducted by a highly experienced radiologist, utilizing the same 1.5 T MRI scanner manufactured by General Electric (Twinspeed; GE Medical Systems, Milwaukee, WI, USA). Initially, plain scanning including routine T1-weighted and T2-weighted images was conducted to rule out any intracranial organic lesions. An echo planar imaging sequence was used with the following parameters: repetition time = 2000 ms, echo time = 40 ms, slice thickness = 5 mm with an interslice gap of 1 mm, field of view = 240 mm × 240 mm, matrix size = 64 × 64, flip angle = 90°, number of excitations = 2.00, number of slices = 24, time points = 160. The total fMRI scan time was 320 s.

The rs-fMRI data were preprocessed with DPARSF software based on the MATLAB R2016a platform. As per the Anatomical Automatic Labeling (AAL) template from MNI, the gray matter in each subject's brain was segmented into 116 regions; the first 90 regions corresponded to the cerebral gray matter, while the remaining 26 regions represented the gray matter of the cerebellum. The rs-fMRI data processing toolkit REST V1.8 was used to extract the mean ReHo value, mean ALFF value, and mean fALFF value of each region.
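For readers reproducing this regional-averaging step outside the MATLAB/DPARSF/REST toolchain, the following is a minimal sketch using nilearn; the subject file name is hypothetical, and in practice the masker's output order should be matched to the atlas labels via aal.indices, a detail omitted here.

```python
# Reduce one subject's ALFF (or ReHo/fALFF) map to regional means with the
# 116-region AAL atlas.
from nilearn.datasets import fetch_atlas_aal
from nilearn.maskers import NiftiLabelsMasker

aal = fetch_atlas_aal()                              # downloads the AAL template
masker = NiftiLabelsMasker(labels_img=aal.maps, strategy="mean")

region_means = masker.fit_transform("subject01_alff.nii.gz")  # shape (1, n_regions)
for name, value in zip(aal.labels, region_means.ravel()):
    print(name, float(value))
```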
Demographics and psychological assessment

Details of sex, age, weight, height, medical history, family medical background, personal history, and habitual hand use were recorded for both SLE patients and HC. Lupus disease activity was assessed using the SLE disease activity index-2000 version (SLEDAI-2000) [15]. We used the Hamilton Depression Scale (HAMD) [16] and the Hamilton Anxiety Scale (HAMA) [17] to assess levels of depression and anxiety in SLE patients. Scores on the two scales were recorded and evaluated by two systematically trained physicians, who achieved good inter-examiner reliability.

Statistical analysis

The statistical software IBM SPSS Statistics 21 (IBM Inc., Armonk, NY, USA) was used for data analysis. Quantitative data following a normal distribution were expressed as mean ± s, and the t test was used for between-group comparisons; quantitative data with a non-normal distribution were expressed as M (P25%, P75%), and the Mann-Whitney U test was used for between-group comparisons. The chi-square test was chosen when analyzing qualitative data. When P < 0.05, the difference was considered statistically significant.

The mean ReHo, mean ALFF, and mean fALFF values of each brain region were used as dependent variables, whereas diagnosis of lupus (SLE vs HC) and 5-HTTLPR genotype (L allele carriers vs S allele homozygotes) were used as independent variables and age as a covariate in a two-way analysis of covariance, to investigate an interaction between disease status and 5-HTTLPR genotype on resting-state brain function. If the interaction was statistically significant, post hoc analysis (Bonferroni method) was used to compare the differences between subgroups. When P < 0.05, the difference was considered statistically significant.

Results

Hardy-Weinberg genetic equilibrium test

According to the inclusion and exclusion criteria established in this study, a total of 51 SLE patients without overt neuropsychiatric manifestations and 44 healthy controls were included. To test the population representativeness of the two groups, the Hardy-Weinberg genetic equilibrium test was used. It showed that both groups are in line with the Hardy-Weinberg genetic equilibrium law (Table 1), indicating that both samples are representative of the population.

General information, 5-HTTLPR genotype, and allele frequency

The gender distribution, age demographics, and educational levels within both the SLE and HC groups displayed no statistically significant variations (Table 2). In the SLE cohort, 26 individuals carried the L allele (3 L/L, 23 L/S), while 25 individuals were S/S homozygotes, resulting in L allele and S allele frequencies of 28.43% and 71.57%, respectively. Within the HC group, there were 21 L allele carriers (4 L/L, 17 L/S), alongside 23 S/S homozygotes, reflecting L allele and S allele frequencies of 28.41% and 71.59%, respectively. Notably, there were no statistically significant discrepancies observed in the 5-HTTLPR genotype or allele frequencies between the SLE and HC groups (Table 2). These findings indicate a congruence in the genetic backgrounds of the 5-HTTLPR gene in both cohorts, suggesting comparability in brain functionality. Clinical parameters in SLE patients with different 5-HTTLPR genotypes are summarized in Table 3.
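The two statistical procedures named above are easy to state in code. Below is a minimal sketch using scipy and statsmodels, with hypothetical file and column names; the genotype counts passed to the Hardy-Weinberg test are the SLE-group counts reported in the paper (3 L/L, 23 L/S, 25 S/S).

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def hwe_chisq(n_ll, n_ls, n_ss):
    """Goodness-of-fit of genotype counts against Hardy-Weinberg proportions."""
    n = n_ll + n_ls + n_ss
    p = (2 * n_ll + n_ls) / (2 * n)                   # L-allele frequency
    expected = np.array([p ** 2, 2 * p * (1 - p), (1 - p) ** 2]) * n
    # ddof=1 because one parameter (the allele frequency) was estimated.
    return stats.chisquare([n_ll, n_ls, n_ss], expected, ddof=1)

print(hwe_chisq(3, 23, 25))

# Two-way ANCOVA per regional mean: diagnosis x genotype with age as
# covariate; the C(group):C(genotype) row of the table tests the interaction.
df = pd.read_csv("region_means.csv")   # columns: reho, group, genotype, age
model = smf.ols("reho ~ C(group) * C(genotype) + age", data=df).fit()
print(anova_lm(model, typ=3))
```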
Gray matter function in SLE patients with different 5-HTTLPR genotypes

The mean ReHo, mean ALFF, and mean fALFF values of each gray matter area were compared between the L allele carrier group and the S/S group within the SLE patients. The results showed that in SLE patients, the ReHo values of the right parietal and inferior angular gyrus, right hippocampus, and right posterior cingulate gyrus of the S/S group were lower than those of the L allele carriers; the ALFF values of the bilateral hippocampus were lower than those of the L allele carriers; and the fALFF values of the right parietal and inferior angular gyrus, bilateral hippocampus, left posterior cingulate gyrus, and right lingual gyrus were lower than those of the L allele carriers (Table 4, Fig. 2).

Interaction between 5-HTTLPR gene and SLE disease state on brain function

The outcomes indicated that the mean ReHo, ALFF, and fALFF values within individual cerebellar gray matter regions remained unaffected by the interplay between SLE disease status and the 5-HTTLPR genotype. Among the 90 cerebral gray matter regions, the mean ReHo, mean ALFF, and mean fALFF values of the right parietal and inferior angular gyrus and the right paracentral lobule were all affected by the interplay between SLE disease status and 5-HTTLPR genotype (Table 5, Fig. 3). Post hoc analysis found that in the S/S genotype, the mean ReHo, mean ALFF, and mean fALFF values of the right parietal and inferior angular gyrus and right paracentral lobule of SLE patients were lower than those of the HC group, while such differences were not found among L allele carriers.

In addition, the mean ReHo value of the left posterior cingulate gyrus, the mean fALFF value of the left caudate nucleus, and the mean ALFF value of the left temporal pole: superior temporal gyrus were all affected by the interaction effect of SLE disease status and 5-HTTLPR genotype (Table 5, Fig. 3). Post hoc analysis found that in the S/S genotype, these values in SLE patients were lower than those of the HC group, while such differences were not found among L allele carriers. The mean ReHo values in the right hippocampus and the mean ALFF values in the left medial and paracingulate gyrus were also affected by the interaction between SLE disease status and the 5-HTTLPR genotype (Table 5, Fig. 3). Post hoc analysis found that among the L allele carriers, the mean ReHo value of the right hippocampus of SLE patients was higher than that of the HC group, and the mean ALFF value of the left medial and paracingulate gyrus was lower than that of the HC group, while such differences were not found among the S/S homozygotes.

Discussion

There have been comparatively few studies investigating 5-HTTLPR in the context of SLE. In our study, there were no significant differences observed in the genotype and allele frequency of 5-HTTLPR between the SLE and HC groups. This aligns with the research findings reported by Li et al. [18], and suggests that there is not a direct association between genetic variations in 5-HTTLPR and susceptibility to SLE. Furthermore, our investigation found no correlation between genetic variations in 5-HTTLPR and SLE disease activity, depression, or anxiety. Xu et al.
revealed that in their study, the average HAMD score among S/S homozygotes in SLE patients was higher than that among L allele carriers [19]. Additionally, within the SLE group, individuals experiencing depression exhibited a higher frequency of the S allele and S/S genotype compared to those without depression [19]. Despite these disparities with our study's findings, the limited sample size in our investigation suggests the need for a re-evaluation of any conclusions; confirmatory support through a larger follow-up study is essential.

The observed variance in gray matter functionality between L allele carriers and S/S homozygotes among SLE patients was primarily identified in the right inferior parietal angular gyrus and bilateral hippocampus. The S/S group exhibited poorer brain function indices across all affected brain areas compared to the L allele-carrying group. Our hypothesis attributes these findings to the difference in the content and concentration of 5-HT within the synaptic cleft between S/S homozygotes and L allele carriers, given the S allele's association with reduced 5-HTT gene transcription and the L allele's association with increased 5-HTT gene transcription.

The interaction between SLE disease status and the 5-HTTLPR genotype resulted in alterations only in the resting-state brain function markers of specific gray matter regions. This interaction notably influenced the mean ReHo, mean ALFF, and mean fALFF values in the right parietal and inferior angular gyrus, along with the right paracentral lobule. The interaction effect highlighted that in individuals with the S/S genotype, these values were lower among SLE patients compared to healthy controls, whereas no such discrepancy was observed in L allele carriers. The functional roles of these regions are notable: the parieto-inferior angular gyrus primarily contributes to self-perception, executive functioning, and the integration of emotional and sensory information from facial stimuli [20], while the paracentral lobule governs motor and sensory innervation of the contralateral lower extremity [21] and plays a role in motor performance, pain processing, and perception [22]. Abnormal paracentral lobular activity has been noted in individuals experiencing chronic pain, impacting sensory pain recognition or assessment [23]. Importantly, there is a lack of research exploring the relationship between the parietal and inferior angular gyrus, the paracentral lobule, and 5-HTTLPR gene polymorphisms. This study highlighted the influence of the interplay between SLE disease state and 5-HTTLPR genotype on the three indices of brain function specifically in the right inferior parietal angular gyrus and the right paracentral lobule. It suggests that the 5-HTTLPR genotype modulates the impact of SLE disease on the brain function of these regions; specifically, individuals with the S/S genotype demonstrate a more pronounced effect of SLE disease on the brain function of these areas.
Moreover, the interplay between the SLE disease state and the 5-HTTLPR genotype influenced the mean ReHo value of the left posterior cingulate gyrus, the mean fALFF value of the left caudate nucleus, and the mean ALFF value of the left temporal pole: superior temporal gyrus. This interaction indicated that in individuals with the S/S genotype, the mean ReHo, fALFF, and ALFF values of these regions among SLE patients were lower compared to those of the HC group, whereas such differences were not observed among L allele carriers. Research has previously highlighted the significance of these brain regions. The posterior cingulate gyrus, known for its high metabolic activity in the resting state, is associated with self-evaluation [24], attention [25], formation of self-thoughts [26], and episodic memory, among other cognitive functions [27]. Similarly, the caudate nucleus, a component of the striatum, plays a crucial role in brain activity, particularly in cognitive function. Our findings indicate that the impact of SLE disease on brain function in the left posterior cingulate gyrus, left caudate nucleus, and left temporal pole: superior temporal gyrus was modulated by the 5-HTTLPR genotype; specifically, individuals with the S/S genotype demonstrated a more pronounced effect of SLE disease on brain function in these regions.

Our investigation further demonstrated that the interaction between SLE disease status and the 5-HTTLPR genotype affected the mean ReHo values in the right hippocampus and the mean ALFF values in the left medial and paracingulate gyrus. This interaction implies that among SLE patients, those carrying the L allele exhibit heightened mean ReHo values in the right hippocampus and reduced mean ALFF values in the left medial and paracingulate gyrus compared to healthy controls, whereas individuals with the S/S genotype do not display such distinctions. The limbic system, encompassing the hippocampus and cingulate gyrus, plays a pivotal role in emotion regulation and cognitive function. Notably, depressive, anxious, and cognitive symptoms are prevalent among SLE patients, potentially attributable to the functional impairment of adjacent brain regions caused by the disease. Our study highlighted that the 5-HTTLPR genotype influenced the impact of SLE disease on the brain function indices of the hippocampus and cingulate gyrus; specifically, individuals with the L/S or L/L genotype demonstrated a more pronounced effect of SLE disease on the brain function of these regions.

Conclusions

In conclusion, our study sheds light on the intricate relationship between SLE, the 5-HTTLPR genotype, and brain function. By uncovering specific brain regions affected by this interaction, we highlight the potential influence of genetic variations on neurological manifestations in SLE patients. However, our findings are tempered by limitations such as sample size constraints, the cross-sectional design, and the exclusive focus on the 5-HTTLPR genotype. Future research endeavors should encompass larger, longitudinal studies integrating diverse genetic and environmental factors. Addressing these limitations could provide a more comprehensive understanding of how genetic variations impact brain function in the context of SLE, ultimately guiding more targeted interventions and therapeutic approaches for affected individuals.

Ethics approval and consent to participate

All participants provided their written informed consent to participate in this study.
Consent for publication

Consent was provided for the publication of magnetic resonance imaging.

Competing interests

All study authors declare that they have no competing interests.

Fig. 1 5-HTTLPR genotyping electropherogram. A sole band at 484 bp signified the S/S genotype, while a solitary band at 528 bp denoted the L/L genotype. The concurrent presence of bands at both 484 bp and 528 bp indicated the L/S genotype.

Table 1 Hardy-Weinberg genetic equilibrium test. SLE, systemic lupus erythematosus; HC, healthy control.

Table 3 Clinical parameters in SLE patients with different 5-HTTLPR genotypes. n, number of cases; a, continuity-correction chi-square test; b, independent-samples t-test; c, Kruskal-Wallis test; d, 2 × 2 chi-square test.

Table 4 Gray matter function in SLE patients with different 5-HTTLPR genotypes. L, left; R, right; n, number of cases; T, test statistic for independent-samples t-test; Z, test statistic for Mann-Whitney U test.

Table 5 Interaction between 5-HTTLPR gene and SLE disease state on brain function. L, left; R, right; AAL, Anatomical Automatic Labeling; F-value, test statistic for analysis of covariance.
2024-01-31T14:15:11.105Z
2024-01-31T00:00:00.000
{ "year": 2024, "sha1": "6e31106e6e0cc9fa202177f1993f2726d0f8c94e", "oa_license": "CCBY", "oa_url": "https://arthritis-research.biomedcentral.com/counter/pdf/10.1186/s13075-024-03276-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ca7910421ed9931ebb63316f1bbf18fed356a08", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54491068
pes2o/s2orc
v3-fos-license
The erosion of African communal values: a reappraisal of the African Ubuntu philosophy

The paper, which exploits conceptual analysis techniques, interrogates an African notion of a 'community' as embodied in the idea of 'Umntu ngumntu ngabantu.' The problem the article seeks to address is the erosion of community values. The study explores the question: how can we retrieve the communal cultural values of tolerance, humanity and respect, and some of the common elements of our cultural treasures of Ubuntu that African communities used to be proud of? Using the philosophy of Ubuntu as a hermeneutic key, I argue that any member of a community whose personal life is guided by Ubuntu could be said to have embraced the core humanistic attributes of Ubuntu: being caring, humble, thoughtful, considerate, understanding, wise, generous, hospitable, socially mature, socially sensitive, virtuous, and blessed; character attributes that veer away from confrontation towards conciliation. The paper is based on a small-scale survey, which exploited an open-ended questionnaire for its data collection. The data revealed that despite major constraints such as poverty and scarcity of resources, crime, substance abuse and many others, family members are still willing to help and support each other. Finally, the study suggests that the values of Ubuntu, if consciously harnessed, can play a major unifying role in the process of harmonising the South African/African nation(s).

Introduction

In African thought, the person is always embedded in a context of social relationships and interdependence, and never seen as an isolated, atomistic individual. In an African community, people view themselves and what they do as equally good for others as for themselves. Traditional African cultural practices are performed by members, clans and relatives in their socio-ethical settings. Kwasi Wiredu (in Waghid et al. 2005) also observes that African traditional cultures are characteristically communalistic (or, if you like, communitarian). Wiredu further asserts that in the African context, communitarianism refers to a culture that is a form of life, exemplifying a certain conception of the role and significance of the community in the life of individuals in society. Gyekye (2002) notes that the communal or communitarian attributes of African socio-ethical thought are reflected in the communitarian features of the social structures of African society. This view is taken further by Waghid et al. (2005), who comment on Gyekye's definition of community. Waghid et al. (2005) postulate that Gyekye sees community not as a mere association of individual persons whose interests and ends are contingently congruent, but as a group of persons linked by interpersonal, biological and/or non-biological bonds. Gyekye concludes that the members of a community also consider themselves primarily as members of the group, with common interests, goals and values. According to Mogobe Ramose (2002b), the notion of remaining in touch in a community is not merely a sociological perception but a moral viewpoint. A good example one can think of is how things were (and still are) done in indigenous African settings, in which people come together whenever problems arise, where ideas are shared, and solutions are sought and found by all community members in a given real-life situation. The social supportive measures listed above are carried out so as to promote peace, love, respect, and working together in social harmony. However, community members no longer trust each other.
Lack of discipline, violence, crime and aggressive behaviour in society have become accepted facts of life. Regardless of fluctuations in rates of incidence and categories, the erosion of traditional codes of humanism continues to pose an ongoing challenge to African communities. For instance, the emergence of a new, anti-traditional African phenomenon of xenophobic incidents among Africans has compounded the fast erosion of the efficacy of Ubuntu. Statistics on cases of xenophobic violence are repeatedly reported by the South African media and researchers. These are not the only destructive factors that undermine the African traditional values that promote harmony and a sense of pride in the African cultural heritage. A study conducted during the period 2004-2008 reveals that school children from different kinds of communities show very little respect for their principals, parents, educators, elders, and friends. Many of them fail to meet commitments in school work; they do not keep time and promises, and fail to do their homework. Besides the above negative behavioural attitudes displayed towards authority, school children also carry knives and guns in schools and in communities. The prevailing deterioration of Africa's social fabric outlined above is endorsed by Waliggo's (2005) study. Waliggo (2005: 9) observes that public morality is very weak in Africa. For him, contemporary African society does not take seriously the current economic immorality and crimes involving fraud, embezzlement of public funds, corruption and abuse of office. Instead of exposing crimes committed against the state, Waliggo argues, many people tend to praise and protect their own relatives and friends who perpetrate economic crimes against the state to uplift their own areas and to enrich their people. Besides the concealment of crimes and corrupt practices against the state by family members, recent media reports further indicate a surge in shootings, stabbings, rapes and robberies. The South African media release statistics on cases of violence and crime committed in communities on a daily basis. In short, these incidences of violence seem to run in cycles, and these cycles appear to mirror the trends of violence in the larger society and communities. In addition to incidences of violence, community members have now developed an individualistic philosophy, which tends to run counter to many traditional values. Khoza (in Roux & Coetzee 1994: 3) defines individualism as "that political and social philosophy that places high value on the freedom of the individual and generally stresses the self-directed, self-contained and comparatively unrestrained individual or ego". This paper poses the question: what could help bring back the communal cultural values of tolerance, humanness and respect, and some of the common priceless features of Ubuntu that African communities used to be proud of? I am conscious of the fact that a societal culture of tolerance does not develop in a vacuum. However, African communities used to, and still, advocate the advancement of Ubuntu. By writing this paper, I wish to promote the value system embodied in Ubuntu, and I hope that everyone who reads this paper revisits the Ubuntu concepts during these turbulent times in the history of our nation.
This paper is aimed at examining the following objectives:

• To investigate the African philosophy of Ubuntu
• To explore the embedded conceptual features of Ubuntu
• To highlight the South African perspective of Ubuntu and its relevance to South African communities
• To unravel problems that hamper the implementation of Ubuntu and
• To suggest the way forward for African communities

How the Ubuntu philosophy shapes and informs African cultural values is the next preoccupation of this paper.

The philosophy of Ubuntu

Ubuntu is difficult to define, and a plethora of definitions exist, each emphasising different elements of the concept. According to Roux & Coetzee (1994: 135), scholars such as Reuel Khoza, E.N. Chikanda, Joe Teffo, Nono Makhundu, Sisho Maphisa and Augustine Shutte have all attempted to define Ubuntu. According to Battle (1996: 99), the concept ubuntu originates from the Xhosa expression 'Umuntu ngumntu ngabanye abantu', which means that each individual's humanity is ideally expressed in relationship with others. Ubuntu consists of the prefix ubu- and the stem -ntu; ubu- evokes the idea of being in general. Thus, ubu-ntu is the fundamental ontological and epistemological category in the African thought of the Bantu-speaking people. Khoza (in Roux & Coetzee 1994) describes Ubuntu as an African view of life, an African world-view. He argues that the distinctive collective consciousness of Africans is manifested in their behaviour patterns, expressions and spiritual self-fulfilment, in which values such as the universal brotherhood of Africans, sharing and treating other people as humans are concretised. His basic idea of universal brotherhood is echoed by other African thinkers in ideas such as sensitivity to the needs and wants of others, the understanding of others' frames of reference, and man as a social being. In his recent groundbreaking work Let Africa Lead, Khoza (2005: 269) defines Ubuntu as "an African value system that means humanness or being human, a worldview characterised by such values as caring, sharing, compassion, communalism, communocracy and related predispositions." Khoza adds that "Although it [Ubuntu] is culturally African in origin, the philosophy can have universal application." The collective consciousness advocated by Ubuntu thinkers involves notions such as universal brotherhood and sharing, which for Mbigi mean "participation". From this view, Mbigi (1997) develops a network of concepts such as "group solidarity", "compassion", "respect", "dignity" and "collective unity" to convey his idea of Ubuntu. This has been stated clearly in Khoza's view of "collective consciousness", which involves universal brotherhood, sharing and treating other people with respect. The sharing characteristic is very important for most Ubuntu philosophers; it is also Mbigi's starting point in developing his views on Ubuntu. Roux and Coetzee (1994) assert that Mbigi bases his model on four principles which he derives from the Ubuntu view of life:

• Morality, which involves trust and credibility.
• Interdependence, which concerns the sharing and caring aspect, that is, co-operation and participation.
• Spirit of man, which refers to human dignity and mutual respect, and insists that human activity should be person-driven and that humanness should be central, and lastly
• Totality, which pertains to the continuous improvement of everything by every member.
In the context of the African spirit of Ubuntu, Khoza (2005: xxi) observes that Ubuntu "constitutes the spiritual cradle of African religion and culture [and] finds expression in virtually all walks of life - social, political and economic." In this sense, the African spirit of Ubuntu should be regarded as one of the origins of the development of a human rights culture in South Africa and the rest of the African continent. The philosophy of Ubuntu espouses a fundamental respect for the rights of others, as well as a deep allegiance to the collective identity. More importantly, Ubuntu regulates the exercise of individual rights by emphasising sharing, co-responsibility and the mutual enjoyment of rights by all. It also promotes good human relationships and enhances human value, trust and dignity. The most outstanding positive impact of Ubuntu on the community is the value it puts on life and human dignity, particularly its caring attitude towards the elderly, who played and continue to play an important communal role in consolidating Ubuntu values. African societies place a high value on human worth, but it is a humanism that finds expression in a communal context rather than in individualism (Teffo 1998: 3). According to Mbiti (1970: 108), "Whatever happens to the individual happens to the whole group and whatever happens to the whole group happens to the individual". This leads to social harmony and cohesion, starting with the family and cultural community and circling out to the global community (Le Roux 2000). By perceiving the individual as being at the centre of this greater whole, the philosophy of Ubuntu may perhaps be described as African humanism.

Characteristic features of Ubuntu

(a) Humanity

The tenets of humanity/humanism shaped and informed the African fight for emancipation from colonialism. The question that needs to be interrogated is: what is the relationship between Ubuntu and humanity/humanism? Firstly, Letseka (2000) identifies the notion of Botho or Ubuntu (humanism) as pervasive and fundamental to African socio-ethical thought. Perceived as an important measure of human well-being or humanness flourishing in traditional African life, Ubuntu/humanism illuminates the communal rootedness and interdependence of persons, and highlights the importance of human relationships. Letseka treats Botho or Ubuntu as normative because it encapsulates moral norms and virtues such as kindness, generosity, compassion, benevolence, courtesy, and respect and concern for others (Letseka 2000). According to Letseka, a person has a duty to give the same respect, dignity, value and acceptance to each member of the community. Moreover, the person who lives in accordance with the principles of Ubuntu in a community is said to embrace its major tenets. The core Ubuntu characteristics are being caring, humble, thoughtful, considerate, understanding, wise, generous, hospitable, socially mature, socially sensitive, virtuous and blessed: enduring humanistic attributes that move away from confrontation towards conciliation. The above qualities are the essential ingredients that enable people to work together. Secondly, Ramose (2002a) argues that Ubuntu can be understood as being human (humanness). Key concepts used to describe Ubuntu are forgiveness, recognition, humaneness, being respectful and being polite. Ramose believes that interdependence, collective consciousness and a communalist worldview are of the utmost importance in the African way of life.
For Ramose, the community ethos requires tolerance, understanding and respect towards all individuals: in interpersonal relationships, in relations between the individual and the group of which he or she forms part, between groups, and between such groups and the larger communities which they compose; interconnected social strata that eventually encompass all ties of humanity. In this context, Ubuntuism may thus be observed at its most basic level in individual interactions and in the operation of small groups (such as families): an interaction that reflects a view of humanity generally. The communal practice of black people is almost, but not entirely, similar to this universal notion. Thirdly, Ubuntu has been called African humanism because it emphasises the value of human dignity irrespective of a person's usefulness. It expresses the idea that a person's life is meaningful only if he or she lives in harmony with other people, because an African person is an integral part of society. For Chikanda, as mentioned by Roux and Coetzee (1994), Ubuntu, which she sees as African humanism, involves alms-giving, sympathy, caring, sensitivity to the needs of others, respect, consideration, patience and kindness. Developing human potential requires, according to Nono Makhundu in Roux and Coetzee (1994), traits such as warmth, empathy, reciprocity, harmony, co-operation and a shared world-view, which make up the Ubuntu culture. Its spirit emphasises respect for human dignity, marking a shift from confrontation to conciliation. Fourthly, humanity in this view is seen as a characteristic of the whole species, because humanity signifies different elements of the human species. African humankind constitutes one family; thus, one gains humanity by entering into relationship with other members of the family. This means that to be human is to affirm one's humanity by recognising the humanity of others and, on that basis, establishing human respect with them. It is therefore logical to contend that to denigrate and disrespect other human beings is, in the first place, to denigrate and disrespect oneself, if it is accepted that oneself is a subject worthy of dignity and respect. A person's (own) humanity is seen to be a gift. These are some of the values we grew up with as Africans in our communities. The individual has a social commitment to share his or her relationship with others. The individual experiences that could be shared with others include his or her record in terms of kindness and good character, generosity, hard work, discipline, honour and respect, and living in harmony (Teffo 1998). The humanness referred to here finds expression in a communal context. However, while the values that the word Ubuntu encapsulates are not themselves unique to African thought, their significance for a society is arguably much more pronounced in African communities. The interdependence of community members in turn leads to the recognition that individuals not only exercise their rights communally, but also have certain duties towards the community as a whole as well as towards other individual members. According to Venter (2004), Ubuntu is a concrete manifestation of the interconnectedness of human beings and the embodiment of African culture and lifestyle.
(b) Tolerance

Tolerance is a value to be achieved by deepening people's understanding of the origins, evolution and achievements of humanity on the one hand, and through the exploration of that which is common and diverse in cultural heritage on the other. Disagreements need not cause harm if there is tolerance and respect for the other's viewpoint in the community structure. Tolerance is the idea that one must not disregard other people's points of view (not even about important moral issues). In addition, the value of tolerance has become even more important now that we live side by side with people who are very different from us. If a society is not tolerant, then there cannot be true freedom. Highlighting the importance of tolerance does not suggest that it is the only value that community members should live by, or even that it is the most important. However, when communities make moral evaluations, it is important that these evaluations evolve from a continuous discussion and debate between the various role-players in the community. Another important characteristic that nourishes the community structure is that of respecting each other.

(c) Respect

In the great context of ideas that best symbolise enlightened humanity, respect, in addition to intelligence and tolerance, is probably the most essential quality, especially for people working together for a common cause. As a value, 'respect' is not explicitly defined in the Constitution, but it is implicit in the way the Bill of Rights governs not just the state's relationship with citizens but citizens' relationships with each other. How can I respect you if you do not respect me? Respect is an essential precondition for communication, for teamwork and for productivity: a vital tool for addressing communal problems. An institution cannot function if mutual respect does not shape and inform its activities. In some of the most important international declarations that South Africa has ratified, and which are therefore legally binding on our country, South Africans have committed themselves to the values of respect and responsibility. The Universal Declaration of Human Rights also states that education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms.

South African perspective of Ubuntu

In the South African context, Ubuntu is seen as a notion with particular resonance in the building of democracy. According to Le Grange (in Waghid et al. 2005: 131), the term Ubuntu has gained prominence in post-Apartheid South Africa. Justice Mokgoro likens Ubuntu to the English word "humanity" and the Afrikaans word "menswaardigheid", and argues that it embraces both section 9 (the right to life) and section 15 (the right to human dignity) of the Constitution's Bill of Rights. The Bill of Rights was born out of a long struggle against colonial oppression and apartheid. In Makwanyane (1995), Justice Sachs said that the concept of Ubuntu should be invoked when the Bill of Rights is applied, to restore dignity to ideas and values that have long been suppressed or marginalised. Born out of the African spirit of Ubuntu, the Bill of Rights is part of the African Renaissance, the rebirth of those African/South African values which were suppressed or marginalised by colonial powers and institutions. The Bill of Rights should therefore be seen as an attempt to give expression to the values associated with Ubuntu.
In the 1990s, Ubuntu received recognition in the interim Constitution; the postamble to the interim Constitution (1993) includes the following lines: "The adoption of this constitution lays the secure foundation for the people of South Africa and enables them to transcend the divisions and strife of the past, which generated gross violations of human rights, the transgression of humanitarian principles in violent conflicts and a legacy of hatred, fear, guilt and revenge". In the postscript to the interim constitution, Ubuntu is explicitly mentioned as being the source of the underlying values of the new South Africa; it is listed alongside the constitution and human rights, against the legacy of hatred. In this formulation, Ubuntu is aligned with the positive values of understanding and reparation, and contrasted with vengeance, retaliation and victimisation. Justice Mokgoro, cited in Makwanyane, refers to Ubuntu as one shared value that runs like a golden thread across cultural lines, and then proceeds to the following definition: generally, ubuntu translates as humaneness. In its most fundamental sense, it translates as personhood and morality. Metaphorically, it expresses itself in Umntu ngumntu ngabantu (a person is a person because of other people), describing the significance of group solidarity on survival issues so central to the survival of communities. While it envelops the key values of group solidarity, compassion, respect, human dignity, conformity to basic norms and collective unity, in its fundamental sense it denotes humanity and morality (Makwanyane 1995: 308). The source of indigenous values to which the passage refers is the concept of Ubuntu. In South Africa, Ubuntu has become a notion with particular resonance in the building of a democracy. In part, its prominence might be understood as an attempt to re-discover African cultural values eroded by both colonialism and apartheid. Although Ubuntu is part of the South African "rainbow heritage" and the society is multi-ethnic and multicultural, the Ubuntu philosophy might have operated, and may still be operating, differently in the diverse community settings of the "rainbow" nation. Another South African perspective on Ubuntu is provided by Sebidi, who warns that the collective values of Ubuntu cannot be compromised. For him, Ubuntu is more than just an attribute of the individual human acts that build the community.

Discussions

The survey revealed that the participants perceived themselves to have a very good understanding of what Ubuntu is. This was shown by a large percentage of people indicating a good understanding of Ubuntu and its values. Furthermore, the survey revealed that Ubuntu is still practised, although not on a large scale, by most communities, and that people are still willing to embrace the values of Ubuntu. However, the survey also produced a long list of hindrances/obstacles that prevent people from practising cultural activities in their communities, as listed below:

• Poverty and lack of resources
• Crime and substance abuse
• Lack of trust among community members
• Foreign cultures and religious beliefs
• Lack of knowledge and motivation

Conclusions and recommendations

The paper draws on Ubuntu as its theoretical framework. In the context of this theoretical structuring, we argue for communitarian principles as a philosophical position that defines a person in terms of social bonds and cultural traditions, rather than through individual traits.
The paper contends that traditional African cultural practices are performed by members, clans and relatives in their socio-ethical settings. Examples have been highlighted, and supported to some extent by research data, of how things were (and still are) done in indigenous African settings in which people come together whenever problems arise, where ideas are shared, and solutions are sought and found by all community members in a given real-life situation. To re-affirm the constraints that confront strategies aimed at reviving and rekindling the spirit of Ubuntu, the obstacles that hinder cultural practices and communal values, such as xenophobia, moral degeneration, crime, religion and individualism, have been highlighted. Finally, the paper argues for the restoration of the philosophy of Ubuntu, as it encapsulates moral norms and virtues such as kindness, generosity, compassion, benevolence, courtesy, and respect and concern for others. This author suggests the retrieval of Ubuntu, knowing full well that African culture, just like all other cultures, has never been static. It is recognised that cultures grow with time. Hence, I propose that people have to re-adjust to meet the new challenges and demands brought about by modernisation. This paper therefore recommends the following measures towards the retrieval of Ubuntu:

• The revival of cultural activities. In some communities this is termed Ibuyambo (a Xhosa name for revival).
• The restoration of African culture. African culture is taken to mean the sum total of the ways in which a society preserves, identifies, organises, sustains and expresses itself.
• The restoration of a shared sense of morality. Within African societies, there is a shared sense of morality that is similar in many respects and based on the key concept of Ubuntu.
• The restoration of public morality. In African communities, public morality regulates the behaviour and values of both the community and the individual, who lives and achieves his or her full humanness within the community.
• The restoration of Ubuntu. Ubuntu includes the values of greeting everyone, sharing, generosity, hospitality, good manners, respect, and protecting one's own dignity and others' human dignity.

Although I recommend the restoration of these cultural values, I am conscious of the fact that communities can never practise culture the way it used to be practised in the past, and that culture and traditions are not static but change with time.
Research on the influence mechanism of China's industrial circulation on the long-term interest rate: Empirical analysis based on a VEC model

As the adjustment space of China's monetary policy gradually expands and the adjustment intensity gradually increases, the influence of external factors on money supply and demand is gradually weakening, and the endogenous mechanism of interest rate formation in the real economy needs to be explored further. Through a long-term interest rate model, this paper reveals the relationship between the circulation status of the real economy and the long-term interest rate. Based on monthly data for China from 2003 to 2019, the paper establishes an error-correction (VEC) model and a state-space model to test and analyse the influence mechanism of the long-term interest rate empirically. The results show that exogenous factors such as monetary policy have a certain influence on the interest rate in the short run, while in the long run the interest rate is affected by the average circulation of goods, described by the inventory increment of manufacturers and the actual production input of enterprises.

Literature review

Money supply and demand determine the interest rate, and the endogenous or exogenous nature of the interest rate essentially depends on the endogenous or exogenous nature of money supply and demand. Mishkin (1995) believes that monetary policy can respond to national demand quickly only if it is transmitted quickly to the market interest rate. It can be seen that the determination of the interest rate actually depends on the supply and demand of money. From the macroeconomic perspective, discussion of the exogeneity of the interest rate is mostly based on the transaction function of money [1]. Liang Dongli (2009), from the perspective of the central bank, believes that the exogeneity of the money supply mainly depends on two levels. The first level is whether the central bank can control the base money when the money multiplier is unchanged, and whether it can control the money multiplier while the base money remains constant. The second level is whether the monetary policy of the central bank can effectively change the interest rate, so as to promote a change in effective demand and thus meet the demand for transaction money [2]. Zou Wenli (2019) believes that, under the influence of the market environment, the exogeneity of the money supply is gradually becoming more pronounced, monetary policy is gradually changing from a quantitative tool to a price tool, and the central bank's adjustment of the interest rate can be divided into a direct influence mechanism and an indirect influence mechanism: the direct mechanism directly changes the short-term interest rate, while the indirect mechanism affects the medium- and long-term interest rate, leads to a change in total output, and ultimately affects the interest rate level [3]. Ma Fangfang (2009) studied the endogeneity of interest rates from the perspective of the asset preferences of the public. Within the framework of endogenous money supply theory, by choosing variables that reflect money demand and the asset preferences of the public, the study found that the current money supply is significantly positively correlated with gross domestic product (GDP) and the consumer price index, while the cash leakage rate and the current money supply are negatively related.
He believes that the endogenous transmission process of the interest rate, with money as the main carrier, is: central bank → commercial banks → enterprises and residents [4]. From the perspective of interest rate marketisation, Huang Jing (2020) proposed that when the interest rate is not completely endogenous, China should actively encourage bank credit to play its financing role, state-owned banks should actively support credit in key areas, and the impact of macroeconomic factors should be weighed effectively under a certain limit of freely floating interest rates, so as to better promote growth and curb inflation [5]. In the evolution of modern economics, most scholars have studied the endogenous and exogenous causes of interest rates from the perspective of money supply. Zhou Liping (2013) took Keynesianism as the cut-off point and discussed the endogeneity and exogeneity of the money supply in different periods, so as to analyse their influence on the endogeneity and exogeneity of the interest rate. The debate before Keynesianism focused on the nature of money and the determination of its value, which was the foundation of the modern debate between endogenous and exogenous money supply. The debate after Keynesianism mainly focused on the endogeneity of the interest rate from the perspective of money supply. Post-Keynesian theories of endogenous money supply include horizontalism, structuralism and the circulation school, which hold that money demand in the financial market determines the money supply of the central bank. Davidson, from the portfolio perspective, holds that the money supply is endogenous: there are endogenous financing needs, money has the nature of capital and a productive function, so the money supply is based on the needs of the real economy and is endogenous, while post-Keynesians divide monetary demand into internal and external influences on the interest rate by nature [6]. Yuan Hui (2021), in the horizontalist reconstruction of Keynesian monetary thought, proposed that the supply of credit money comes from the financing demand of economic subjects, and the interest rate is an exogenous variable set by the monetary authority and the main monetary policy tool [7]. Gracjan Robert Bachurewicz (2019) explored the same question from the post-Keynesian perspective of money supply: taking Poland as an example and using Granger causality tests, he analysed the endogeneity of the Polish money supply over the past 15 years and found that bank credit demand also results in changes in the monetary base, leading to changes in bank deposits and the money supply [8]. From the perspective of the dual determination of the interest rate, Zeng Xianjiu (2001) believed that the interest rate has the dual nature of endogeneity and exogeneity, and made a dialectical analysis of the relationship between the two. He believed that the endogeneity of the interest rate means that the interest rate is the price of capital and a reward to lending capitalists for transferring the right to use capital, while interest is a derivative form of the interest rate and represents surplus value. Marx believed that the interest rate is determined by profit, capital supply and demand, the production cycle and other factors. In Das Kapital, Marx comprehensively demonstrated the decisive role of the interest rate in the economy. While both attributes play a role, they are not equally distributed in the same interest rate: each interest rate has certain dominant attributes.
For example, when the central bank sets the official interest rate, the rate is exogenous, but the endogeneity of the interest rate should be taken into account when setting the official rate, and the real economic situation should be considered. A completely exogenous or completely endogenous interest rate therefore does not exist in practice; the endogeneity and exogeneity of the interest rate are dialectically unified [9]. Li Jianjian (2008) believes that the interest rate is a dialectical unity of endogeneity and exogeneity. The endogeneity of the interest rate lies in the fact that the interest rate is determined by both supply and demand in the market, which fully reflects the market situation; its exogeneity lies in the fact that monetary authorities cause interest rates to be influenced by external forces. Only the full combination of the two can jointly promote economic development [10]. Tu Chuanjun (2011) discussed the theory of the internal and external generation of the money supply. In his book Monetary Theory, Laurence Harris proposed that the money supply is regulated by the central bank in order to stabilise employment and ensure output, according to changes in the demand for transaction money caused by changes in the income level; the money supply may also be changed by factors outside the economic system, which should be studied empirically. The internal and external symbiosis of the interest rate is mainly affected by multiple factors outside the money supply system [11]. Before the emergence of endogenous money theory, research on the exogeneity of the interest rate went through a long and tortuous development process, from the exogeneity of the money supply to the endogeneity of the money supply, and from money demand to the issuance of credit loans; such research mainly took the bank as the research object, with investment and saving as core variables, based on the IS-LM model. Previous studies held that the interest rate itself is not based on any particular foundation but is exogenously determined, changing according to the actual situation of money supply and demand. Taking the real economy as its scope, this paper explores the endogenous formation mechanism of the interest rate in depth, takes the inventory increment of manufacturers and the production input of enterprises as the research objects, and establishes a long-run interest rate determination model with endogenous money supply and demand. At the same time, the influence of industrial circulation on money supply and demand and the degree of response to monetary policy are analysed through a theoretical model and an empirical study, so as to study the transmission process of the exogenous determination mechanism of the interest rate.

Theoretical model

As credit creation plays an increasingly important role in contemporary society, financial derivatives have become an important source of funds, and manufacturing enterprises can obtain financing in a variety of forms. After credit creation, manufacturing enterprises gradually adopt a series of diversified financing methods such as bonds and stocks. At the same time, in order to expand their business actively, the major banks continue to meet the needs of consumers through credit cards and other means, and the credit system continues to expand. With the continuous growth of consumer demand, the demand for money is also increasing gradually.
The development of online trading systems has led to the continuous emergence of online lending platforms, such as Ant Huabei. In fact, monetary demand and the credit creation system are symbiotic, and the two show endogenous consistency over the long run. The continuing expansion of manufacturing demand manifests itself as demand for money. First of all, the purchase of factors of production requires part of the monetary capital; it basically includes inputs of elements such as raw materials, labour and fixed-asset depreciation. Secondly, since interest is generated when borrowing from banks and other financial institutions, interest, as a debt-repayment production input, also requires part of the monetary capital. Therefore, this paper builds a theoretical model of money demand and money supply from the perspective of manufacturers. A firm's demand for money in period $t$, denoted $M^d_t$, depends on the input of prepaid capital in each period, which consists of constant capital and variable capital. The supply of money comes from the sale of goods in each period, denoted $M^s_t$. Because manufacturers run payment imbalances over certain periods, $M^d_t \neq M^s_t$ is normal for each individual firm. When production and sales generate a surplus, the enterprise uses part of the capital to maintain its normal operation, and the rest can be saved or invested; at this point, banks and other financial institutions carry out a new round of financing. Therefore, although a currency imbalance exists at such moments, it is in fact only short-term and local and does not hold for the whole market; from the long-term equilibrium point of view, the money market remains in equilibrium. In a certain period, when supply is greater than demand because the growth in the quantity of goods exceeds the input of commodity production factors, the circulation of goods is in a favourable stage and the interest rate falls; when supply is less than demand, the circulation of goods is in a worsening stage and interest rates rise. A long-term interest rate equation is written as $r_t = F(\Gamma_t)$, where $\Gamma_t$ is the measure of circulation at time $t$, constructed from the input of prepaid capital of the enterprise and the sales volume of goods; by the properties of the function, the interest rate rises as circulation worsens and falls as circulation improves.

Empirical analysis

Monthly data for China from 2003 to 2019 are selected for the empirical analysis. Month-on-month CPI inflation is expressed as $\pi_t$, and the yield on Chinese one-year government bonds as $y_t$; the total wages of industrial employees plus total investment in industrial fixed assets (the sum of the two) are expressed as $W_t$; the period-end industrial inventory of goods is expressed as $\Delta Q_t$. The data were obtained from the National Bureau of Statistics website and estimated using Eviews 8.0. In the theoretical model, the production and sales of goods affect the interest rate through the endogenous rules of money supply and demand; in reality, the interest rate can also affect the production and sales of goods through investment and consumption. Firstly, the ADF test is adopted to test for unit roots in the variables, so that the dynamic relationship between the variables can be described through a VAR model. In this ADF test, the variables $y_t$, $W_t$ and $\Delta Q_t$ are all stationary in levels.
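To make the sign logic of the theoretical model explicit, the following LaTeX sketch writes out one plausible formalization. The ratio form of $\Gamma_t$, the linear debt term, and the monotonicity assumption are illustrative assumptions supplied here for clarity; the paper's original displays were lost in extraction.

```latex
% Illustrative formalization (assumed forms; the paper's exact displays were lost).
% Money demand: prepaid capital outlays; money supply: sales receipts.
\begin{align*}
  M^{d}_{t} &= W_t + i_t B_t   && \text{(factor inputs plus interest on debt } B_t\text{)}\\
  M^{s}_{t} &= S_t             && \text{(receipts from goods sold)}\\
  \Gamma_t  &= \frac{W_t}{S_t} && \text{(circulation measure: inputs relative to sales)}\\
  r_t       &= F(\Gamma_t), \qquad F' > 0
\end{align*}
% When sales outpace input growth, \Gamma_t falls and the long-run rate falls;
% when inventories pile up and inputs outpace sales, \Gamma_t rises and the rate rises.
```

Under these assumed forms, the long-run coefficient on the inventory increment is positive and the coefficient on production input entering sales is negative, which is the pattern the empirical section reports.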
The results of the ADF test are shown in Table 1. According to the cointegration test results, there is only one cointegration relationship between the three variables, and the number of cointegration vectors is less than the number of variables, so it is necessary to estimate an error-correction (VEC) model. According to the estimation results, the VEC model fits well, and the t-values of the main variables are significant at the 90% confidence level, as shown in Table 3 below. Turning to the impulse responses of $y_t$ to $\Delta Q_t$ and $W_t$: when the inventory increment ratio $\Delta Q_t$ increases, $\mathrm{d}y$ first drops in interval 2, then rises quickly in interval 3, falls again in interval 4, begins relatively gentle fluctuation within interval 5, falls within interval 8, rises from interval 9, and then levels off. In response to a shock to $W_t$, $\mathrm{d}y$ decreases significantly up to interval 3, rises slightly in interval 3 owing to inflation, then decreases as current production capital causes endogenous money demand to exceed exogenous money demand, drops to interval 6, rises to interval 7, and reaches stability within interval 8. It can be seen from the impulse responses of $y$ to $\Delta Q_t$ and $W_t$ that an increase in the inventory ratio (an increase in $\Delta Q_t$) and, in the long run, an increase in production capital input have a positive impact on the increment of the interest rate (see Figure 1). In the long run, the coefficient of $\Delta Q_t$ on $y$ is positive, and the coefficient of $W_t$ on $y$ is negative, roughly consistent with the predicted signs of the coefficients A and B in the theoretical model. From the perspective of the impulse responses, exogenous factors have temporary effects on the interest rate, while endogenous shocks from actual economic circulation have long-term effects on the interest rate, in line with the conclusion of the theoretical model. Although there is one cointegration equation and the VEC model has passed the error-correction test, the diagnostic tests show that while autocorrelation can be rejected, heteroscedasticity cannot, indicating that important variables are missing from this model. This may be caused by the following: first, the paper rests on assumptions of constant productivity, a constant factor input rate and a constant product output rate, which are at odds with reality; secondly, owing to the continuous emergence of financial innovations and other exogenous factors, the connotation of circulation has changed. In order to study the dynamic characteristics of the influence of circulation on the interest rate more comprehensively, a state-space model can be introduced, in which state variables that are difficult to observe in practice are included in an observable model to analyse the changing relationship between the variables. After the relevant premise tests and other calculations, the state-space measurement model can be established.
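As a rough illustration of the estimation pipeline (ADF unit-root tests, Johansen cointegration-rank selection, then a VECM with impulse responses), the sketch below uses Python's statsmodels on placeholder data. The file name, column names, and lag order are assumptions for illustration only, since the paper's exact Eviews specification is not reported.

```python
# Sketch of the ADF -> Johansen -> VECM pipeline with statsmodels.
# Column names (y, W, dQ), file name, and lag order are illustrative assumptions.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# df: monthly observations with columns
#   y  - one-year government bond yield
#   W  - industrial wages + fixed-asset investment
#   dQ - period-end industrial inventory increment
df = pd.read_csv("china_monthly_2003_2019.csv", index_col=0, parse_dates=True)

# 1) ADF unit-root test on each series (H0: the series has a unit root).
for col in df.columns:
    stat, pvalue, *_ = adfuller(df[col].dropna())
    print(f"ADF {col}: stat={stat:.3f}, p={pvalue:.3f}")

# 2) Johansen trace test to select the cointegration rank
#    (the paper finds exactly one cointegrating relation).
rank = select_coint_rank(df, det_order=0, k_ar_diff=2, method="trace")
print(rank.summary())

# 3) Fit the VECM with the selected rank; the long-run (beta) and
#    adjustment (alpha) coefficients come out in the summary.
vecm = VECM(df, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci")
res = vecm.fit()
print(res.summary())

# 4) Impulse responses over 12 months, as in the paper's Figure 1.
irf = res.irf(12)
irf.plot()
```

The `deterministic="ci"` choice restricts the constant to the cointegrating relation; other deterministic specifications would change the rank test and should be checked against the data.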
Table 3. VEC error correction model.

The estimated results of the state-space model are shown in Table 4. From the time series of the state variables, over the past six years the response coefficient of current-period enterprise production-cost input has fluctuated. Because China's monetary policy has changed frequently since 2015 (during 2015-2017 the People's Bank of China cut the benchmark RMB loan and deposit rates of financial institutions several times in order to further reduce enterprise financing costs), the influence of exogenous intervention factors occupied the dominant position in China's economy, and the relative influence of the actual circulation status on interest rates weakened sharply. The coefficient rebounded from 2017; beginning in 2018, China's money market rates moved downward overall, and the medium- and long-term liquidity supply gradually strengthened. The coefficient tends to rise steadily and fluctuates within a range around 2.0, indicating that the effectiveness of exogenous intervention shocks has gradually weakened over the past two years.

Conclusion

The long-term interest rate model reveals the internal relationship between the circulation status of the real economy and the long-term interest rate. Taking data for China from 2003 to 2019 as an example, this paper conducts an empirical analysis of the influence mechanism of the long-term interest rate by constructing an error-corrected VEC model and a state-space model. The results show that exogenous factors such as monetary policy have a certain influence on the interest rate in the short run, while in the long run the interest rate is affected by the average circulation of goods, described by the inventory increment of manufacturers and the actual production input of enterprises. Observation of the model shows that the influence mechanism of the long-term interest rate described here has been weakened by the expansionary monetary policy adopted in China in recent years, the increasing variety of financial derivatives, and the diversification of electronic and cross-border payment methods. According to the parameter analysis, after the People's Bank of China lowered the benchmark interest rates on RMB loans and deposits of financial institutions, the influence of the interest rate mechanism of money supply and demand determined by the circulation of the real economy gradually expanded.
PICONE TYPE FORMULA FOR NON-SELFADJOINT IMPULSIVE DIFFERENTIAL EQUATIONS WITH DISCONTINUOUS SOLUTIONS

As impulsive differential equations are useful in modelling many real processes observed in physics, chemistry, biology, engineering, etc., see [1, 11, 13, 20, 21, 22, 25, 26, 27], there has been an increasing interest in studying such equations from the point of view of stability, asymptotic behavior, existence of periodic solutions, and oscillation of solutions. The classical theory can be found in the monographs [9, 18]. Recently, the oscillation theory of impulsive differential equations has also received considerable attention; see [2, 14] for the Sturmian theory of impulsive differential equations, and [15] for a Picone type formula and its applications. Due to the difficulties caused by the impulsive perturbations, the solutions are usually assumed to be continuous in most works in the literature. In this paper, we consider second order non-selfadjoint linear impulsive differential equations with discontinuous solutions. Our aim is to derive a Picone type identity for such impulsive differential equations, and hence to extend and generalize several results in the literature.

Introduction

We consider second order linear impulsive differential equations of the form (1.1), where ∆z(t) = z(t⁺) − z(t⁻) and z(t±) = lim_{τ→t±} z(τ). For our purpose, we fix t₀ ∈ ℝ and let I₀ be an interval contained in [t₀, ∞). We assume without further mention that:
(i) {p_i}, {p̃_i}, {q_i} and {q̃_i} are real sequences and {θ_i} is a strictly increasing unbounded sequence of real numbers;
(ii) k, r, p, m, s, q ∈ PLC(I₀) := {h : I₀ → ℝ | h is continuous on each interval (θ_i, θ_{i+1}), the one-sided limits h(θ_i±) exist, and h(θ_i) = h(θ_i⁻) for every i}.
Note that if z ∈ PLC(I₀) and ∆z(θ_i) = 0 for all i ∈ ℕ, then z is continuous, and conversely. If τ ∈ ℝ is a jump point of the function z(t), i.e. ∆z(τ) ≠ 0, then there exists a j ∈ ℕ such that θ_j = τ. Throughout this work, we denote by j_τ the index j satisfying θ_j = τ.

By a solution of the impulsive system (1.1) on an interval I₀ ⊂ [t₀, ∞) we mean a nontrivial function x which is defined on I₀ such that x, x′, (kx′)′ ∈ PLC(I₀) and x satisfies (1.1) for all t ∈ I₀.

Definition 1.1. A function z ∈ PLC(I₀) is said to have a generalized zero at t = t* if z(t*⁺)z(t*) ≤ 0, t* ∈ I₀. A solution is called oscillatory if it has arbitrarily large generalized zeros, and a differential equation is oscillatory if every solution of the equation is oscillatory.

The Main Results

Let I be a nondegenerate subinterval of I₀. In what follows we shall make use of the following condition:

k(t) = m(t) whenever r(t) = s(t), for all t ∈ I.    (C)
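For orientation, it may help to recall the classical (non-impulsive, self-adjoint) Picone identity that results of this kind extend. The LaTeX display below is standard background and is not the paper's impulsive identity; it is included only to show the shape of the formula being generalized.

```latex
% Classical Picone identity (non-impulsive, self-adjoint case), for background.
% For (k u')' + p u = 0 and (m v')' + q v = 0 with v(t) \neq 0 on I:
\[
  \frac{d}{dt}\!\left[\frac{u}{v}\bigl(k u' v - m u v'\bigr)\right]
  = (q - p)\,u^{2} + (k - m)\,u'^{2}
    + m\left(u' - \frac{u v'}{v}\right)^{2}.
\]
% Integrating over [a,b] with u(a) = u(b) = 0 and m > 0 gives the Leighton-type
% comparison: if \int_a^b [(q - p)u^2 + (k - m)u'^2]\,dt > 0, then v must vanish
% somewhere in (a, b).
```

The impulsive, non-selfadjoint version developed in the paper must additionally account for the jump terms ∆z(θ_i), which is where condition (C) enters.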
We will see that condition (C) is quite crucial in obtaining a Picone type formula, as in the case of non-impulsive differential equations. If (C) fails to hold, then a device of Picard is helpful. The Picone type formula is obtained by making use of the following Picone type identity (Lemma 2.1), consisting of a pair of identities, (2.1) and (2.2).

Proof. Let t ∈ I. If t ≠ θ_i, then direct differentiation and rearranging give (2.1). If t = θ_i, then the computation becomes more involved; we see that (2.2) holds.

Proof. Using (2.1) and (2.2), and employing Lemma 2.1 with the indicated choice of functions, we easily see that (2.3) holds.

The following corollary is an extension of the classical comparison theorem of Leighton [10, Corollary 1].

Theorem 2.2 (Leighton type comparison). Let x be a solution of (1.1) having two generalized zeros a, b ∈ I. Suppose that (C) holds and that the corresponding integral inequality over [a, b] is satisfied. Then every solution y of (1.2) must have at least one generalized zero in [a, b].

Corollary 2.1. Suppose that (2.10)-(2.11) hold for all t ∈ [a, b], and that (2.5) holds for all i for which θ_i ∈ [a, b]. If either (2.10) or (2.11) is strict in a subinterval of [a, b], or one of the inequalities in (2.5) is strict for some i ∈ ℕ, then every solution y of (1.2) must have at least one generalized zero on [a, b].

In the proof one passes to the limit in (2.6): as ε → 0⁺, the left-hand side of (2.6) tends to the required quantity; using (2.5) and (2.7) in (2.6) we obtain the desired estimate, and similarly, if x(t) is continuous at t = a (i.e. ∆x(a) = 0) but not at t = b, then the analogous estimate holds.

Corollary 2.2. Suppose that the conditions (2.10)-(2.11) are satisfied for all t ∈ [t*, ∞) for some integer t* ≥ t₀, and that (2.5) is satisfied for all i for which θ_i ≥ t*. If one of the inequalities in (2.5) or in (2.10)-(2.11) is strict, then every solution y of (1.2) is oscillatory whenever a solution x of (1.1) is oscillatory.

As a consequence of Theorem 2.2 and Corollary 2.1, we have the following oscillation criterion.

Corollary 2.3. Suppose that for any given T ≥ t₀ there exists an interval (a, b) ⊂ [T, ∞) for which either the conditions of Theorem 2.2 or those of Corollary 2.1 are satisfied; then every solution y of (1.2) is oscillatory.

Device of Picard

If the condition (C) fails to hold, then we introduce the so-called device of Picard [16] (see also [7, p. 12]), and thereby obtain different versions of Corollary 2.1. Clearly, for any h ∈ PLC(I), it is not difficult to see that, assuming r′, s′ ∈ PLC(I) and taking h = (r − s)/2, we obtain the following results in a similar manner as in the previous section.

Theorem 3.1. Let r′, s′ ∈ PLC(I) and x be a solution of (1.1) having two consecutive generalized zeros a and b in I. Suppose that (3.1)-(3.2) are satisfied for all t ∈ [a, b], and that the inequalities in (3.3) hold for all i for which θ_i ∈ [a, b]. If either (3.1) or (3.2) is strict in a subinterval of [a, b], or one of the inequalities in (3.3) is strict for some i, then every solution y of (1.2) must have at least one generalized zero in [a, b].

Corollary 3.1. Suppose that the conditions (3.1)-(3.2) are satisfied for all t ∈ [t*, ∞) for some integer t* ≥ t₀, and that the conditions in (3.3) are satisfied for all i for which θ_i ≥ t*. If r′, s′ ∈ PLC[t*, ∞) and one of the inequalities (3.1)-(3.3) is strict, then (1.2) is oscillatory whenever a solution x of (1.1) is oscillatory.

Theorem 3.2 (Leighton type comparison). Let r′, s′ ∈ PLC[a, b] and x be a solution of (1.1) having two generalized zeros a, b ∈ I such that the corresponding integral inequality over [a, b] and the impulse inequalities hold for all i for which θ_i ∈ [a, b]. Then every solution y of (1.2) must have at least one generalized zero on [a, b].

From Theorem 3.1 and Theorem 3.2, we have the following oscillation criterion.

Corollary 3.2. Suppose that for any given t₁ ≥ t₀ there exists an interval (a, b) ⊂ [t₁, ∞) for which either the conditions of Theorem 3.1 or those of Theorem 3.2 are satisfied; then (1.2) is oscillatory.

Further results

The lemma below, cf. [2, Lemma 1] and [14, Lemma 3.1], is used for comparison purposes. The proof is a straightforward verification.

Lemma 4.1. Let ψ be a positive function for t ≥ α with ψ′, ψ″ ∈ PLC[α, ∞), where α is a fixed real number. Then the associated function is a solution of an equation whose coefficients a_j ∈ PLC[α, ∞), j = 0, 1, 2, and whose impulse data {e_i} and {ẽ_i} are real sequences.

In view of Lemma 4.1 and Corollary 2.2, we can state the next theorem: if one of the inequalities in (4.9) and (4.10) is strict, then every solution x of (1.1) is oscillatory.

It is clear that an impulsive differential equation with a known solution can be used to obtain more concrete oscillation criteria. For instance, consider the impulsive differential equation for which it is easy to verify that x(t) = x_i(t), where x_i(t) = (e + i)(t − i) + i e^{t−i}, t ∈ (i − 1, i], i ∈ ℕ, is an oscillatory solution with generalized zeros τ_i = i and ξ_i = i(e + i − 1)(e + i)⁻¹ ∈ (i − 1, i). Indeed, x(τ_i)x(τ_i⁺) < 0 and x(ξ_i) = 0, i ∈ ℕ. Applying Corollary 2.2, we easily see that equation (1.1) with θ_i = i is oscillatory if there exists an n₀ ∈ ℕ such that the required inequality holds for each fixed i ≥ n₀ and for all t ∈ (i − 1, i].

We finally note that it is sometimes possible to eliminate the impulse effects from a differential equation. In our case, if the relevant condition holds, the oscillatory natures of x and z are the same. However, the restriction imposed is quite severe.
Dysregulation of SWI/SNF Chromatin Remodelers in NSCLC: Its Influence on Cancer Therapies including Immunotherapy

Lung cancer is the leading cause of cancer death worldwide. Molecularly targeted therapeutics and immunotherapy have revolutionized the clinical care of NSCLC patients. However, not all NSCLC patients harbor molecular targets (e.g., mutated EGFR), and only a subset benefits from immunotherapy. Moreover, we lack reliable biomarkers for immunotherapy, although PD-L1 expression has mainly been used for guiding front-line therapeutic options. Alterations of the SWI/SNF chromatin remodeler occur commonly in patients with NSCLC. This subset of NSCLC tumors tends to be undifferentiated, presents high heterogeneity in histology, and shows a dismal prognosis because of poor response to the current standard therapies. The catalytic subunits SMARCA4/A2 and the DNA-binding subunits ARID1A/ARID1B/ARID2, as well as PBRM1, were identified as the most commonly mutated subunits of SWI/SNF complexes in NSCLC. Mechanistically, alteration of these SWI/SNF subunits contributes to the tumorigenesis of NSCLC by compromising the function of critical tumor suppressor genes, enhancing oncogenic activity, and impairing DNA repair capacity, with attendant genomic instability. Several vulnerabilities of NSCLCs with altered SWI/SNF subunits have been detected and evaluated clinically using EZH2 inhibitors, PROTACs against mutually synthetic-lethal paralogs of the SWI/SNF subunits, as well as PARP inhibitors. The response of NSCLC tumors with an alteration of SWI/SNF to ICIs might be confounded by the coexistence of mutations in genes capable of influencing patients' response to ICIs. High heterogeneity in tumors with SWI/SNF deficiency might also be responsible for the seemingly conflicting results of ICI treatment of NSCLC patients with alterations of SWI/SNF. In addition, an alteration of each different SWI/SNF subunit might have a unique impact on the response of NSCLC with deficient SWI/SNF subunits. Prospective studies are required to evaluate how the alterations of SWI/SNF in this subset of NSCLC patients impact the response to ICI treatment. Finally, it is worthwhile to point out that combining inhibitors of other chromatin modulators with ICIs has been proven to be effective for the treatment of NSCLC with deficient SWI/SNF chromatin remodelers.

Introduction

Lung cancer is the leading cause of cancer death worldwide. In 2022, more than 130,000 patients died of lung cancer in the United States alone [1]. The poor prognosis is attributed to the high morbidity and low response rate to conventional chemotherapy, because the majority of patients with lung cancer are only diagnosed at an advanced stage. Further, 85% of lung cancers are categorized by their histological types as the non-small cell lung cancer (NSCLC) subtype [2]. Targeted therapy revolutionized the clinical care of NSCLC patients.

Table 1. List of immune checkpoint inhibitors approved by the FDA for the treatment of NSCLC. Ref. [9].

Nivolumab (PD-1)
• 2022: Nivolumab + chemotherapy as neoadjuvant treatment
• 2020: Nivolumab + ipilimumab + limited chemotherapy as 1st treatment of metastatic or recurrent NSCLC
• 2020: Nivolumab + ipilimumab as 1st treatment of metastatic NSCLC (PD-L1 ≥ 1%)
• 2015: Advanced (metastatic) NSCLC progressed during or after platinum-based chemotherapy
• 2015: Advanced (metastatic) squamous NSCLC with progression on or after platinum-based chemotherapy

Pembrolizumab (PD-1)
• 2018: Pembrolizumab + chemotherapy for the 1st treatment of metastatic squamous NSCLC
• 2017: Pembrolizumab + chemotherapy for the 1st treatment of metastatic non-squamous NSCLC, ± PD-L1
• 2016: Metastatic NSCLC (PD-L1 ≥ 50%) without EGFR/ALK genomic tumor aberrations
• 2015: Advanced NSCLC progressed after other treatments and with tumors that express PD-L1

Cemiplimab (PD-1)
• 2022: Cemiplimab + chemotherapy as 1st treatment for advanced NSCLC

Atezolizumab (PD-L1)
• 2021: Adjuvant treatment following surgery and chemotherapy for stage II-IIIA NSCLC (PD-L1 ≥ 1%)
• 2020: 1st treatment for NSCLC with PD-L1 expression and no EGFR/ALK genomic tumor aberrations
• 2019: Atezolizumab + chemotherapy for the 1st treatment of NSCLC with no EGFR/ALK aberrations
• 2018: Atezolizumab + bevacizumab + chemotherapy for the 1st treatment of metastatic NSCLC with no EGFR/ALK aberrations
• 2016: Metastatic NSCLC with disease progression during/following chemotherapy

Durvalumab (PD-L1)
• 2022: Durvalumab + tremelimumab + chemotherapy for the treatment of metastatic NSCLC
• 2018: Unresectable stage III NSCLC not progressed after treatment with chemotherapy and radiation

Ipilimumab (CTLA-4)
• 2020: Ipilimumab + nivolumab + chemotherapy as 1st treatment of metastatic or recurrent NSCLC
• 2020: Ipilimumab + nivolumab as 1st treatment of NSCLC (PD-L1 ≥ 1%)

Tremelimumab (CTLA-4)
• 2022: Tremelimumab + durvalumab + chemotherapy for the treatment of metastatic NSCLC

Compared with conventional chemotherapy, which kills all dividing cells, including normal cells, ICIs have greatly improved the overall survival of a subset of NSCLC patients. However, the majority of NSCLC patients do not respond very well (because of primary resistance), and a substantial portion of patients who do respond initially will eventually develop acquired resistance [13]. Several outstanding questions are imperative for the field to answer, such as: (1) the mechanisms of response and resistance to immunotherapy; (2) the identification of biomarkers to predict response or resistance; and (3) how to overcome this resistance. The field has been uncovering the mechanisms of resistance, such as inadequate neo-antigens expressed in the tumor cells, impaired processing and presentation of tumor antigens to T lymphocytes, limited T-lymphocyte infiltration into the TME (tumor microenvironment), compromised function of effector T cells owing to impaired interferon signaling, proficient immune suppressive cells, T-cell exhaustion, etc. [14]. We have also been discovering the factors related to response; some of them were evaluated as predictive biomarkers in clinical studies, such as PD-L1 expression (on tumor and immune cells), high tumor mutational burden (TMB), and mismatch repair deficiency or microsatellite instability in the tumor biopsy. Although these markers are able to guide clinical decision making, they are deemed incomplete, and it has become apparent that single markers cannot adequately represent the tumor biology in relation to response or resistance [15]. Hence, a comprehensive understanding of tumor biology, the tumor microenvironment, and other host factors, such as the microbiome, will be required for the accurate prediction of prognosis or response to immunotherapy [16]. Emerging studies suggest that epigenetics, particularly chromatin regulators, play a significant role in tumor biology and the functions of immune cells in the TME [17].
In this review, we will first briefly discuss the basic biology of cancer epigenetics, followed by a discussion of the composition and modular structure of human SWI/SNF chromatin remodelers. Then, we will review the recent findings about the mechanisms by which deregulation of SWI/SNF subunits contributes to the tumorigenesis of NSCLC. Next, we will discuss the vulnerabilities of, and strategies for targeted therapies for, NSCLC with deregulation of SWI/SNF complexes. Lastly, we will discuss the relationship between the efficacy of ICI treatment and the loss of the human SWI/SNF complex in patients with NSCLC.

Cancer Epigenetics and SWI/SNF Chromatin Remodelers

Dysregulated gene expression is critical for the initiation and progression of all malignancies [18]. In eukaryotes, DNA is tightly compacted with histone proteins in the nucleus. Its length is condensed from the original 2 m to as short as 10 µm to fit into the nucleus. The complex consisting of DNA and histone proteins constitutes chromatin. The basic structural unit of chromatin is the nucleosome: a 146 bp DNA fragment wrapped around a histone octamer consisting of an H2A dimer, an H2B dimer, and an H3 and H4 tetramer, attached with two H1 globular proteins and various lengths of linker DNA. Multiple nucleosomes along the DNA present a "beads on a string" appearance. The nucleosome forms the 30 nm fiber secondary structure of chromatin, which compacts further into higher-order chromatin structures hierarchically [19]. The compactness of the chromatin obstructs regulatory proteins from accessing DNA, which is essential during biological processes such as DNA transcription, DNA replication, and DNA damage repair. Therefore, chromatin structure, especially the distribution of nucleosomes over the DNA, needs to be regulated dynamically to maintain an appropriate "openness" of DNA for various regulatory proteins [20]. There are three major classes of chromatin regulators: covalent chromatin regulators, non-coding RNAs, and non-covalent chromatin regulators. They work together to determine a specific chromatin structure in a cell. Covalent chromatin regulators regulate DNA methylation and demethylation, histone acetylation and deacetylation, histone methylation and demethylation, etc. Non-coding RNA is involved in the process of histone modification [21]. Non-covalent chromatin regulators regulate nucleosome sliding, nucleosome ejection, nucleosome assembly, nucleosome editing, and variant histone replacement. There are four non-covalent chromatin regulators: switch/sucrose non-fermentable (SWI/SNF), imitation switch (ISWI), chromodomain helicase DNA binding (CHD), and INO80. They are evolutionarily conserved chromatin remodeling complexes that have different subunit compositions, and each plays a non-redundant role in executing ATP-dependent chromatin remodeling [22]. Covalent chromatin regulators change marks on the chromatin by increasing ("writing") or decreasing ("erasing") post-translational modifications of histones. These marks on the histone (the histone code) will be recognized ("read") by non-covalent chromatin regulators. Therefore, the specific chromatin structure of each cell, i.e., the epigenome, is the result of collaboration among different chromatin regulators. The epigenome determines which genes in the genome will be expressed. Similar to the genotype, the epigenome of a cell can be inherited from a parent cell by its descendant cells. In contrast to genetic information, however, the epigenome of a cell is not stored in the DNA sequence.
A unique epigenome exists in each cell, depending on the differentiation, developmental, and proliferation status of the cell, whereas all cells from the same individual contain the same DNA sequence. Mutations and deregulation of epigenetic regulators happen frequently in malignancies. For example, the overexpression of DNA methyltransferases (DNMTs) and histone deacetylases (HDACs) is frequently found in various malignancies. Targeting epigenetic regulators for cancer therapeutics, especially for the treatment of hematological malignancies, has attracted extensive attention. At least nine drugs targeting epigenetic regulators have been approved by the FDA for the treatment of hematological malignancies (Table 2), and their potential for the treatment of solid cancers has been investigated in clinical trials [17]. It is noteworthy that hydralazine, previously approved for the treatment of hypertension [23], has been shown to have inhibitory properties on DNA methylation [24] and has been tested for the treatment of prostate cancer [25]. Alterations in chromatin remodeling complexes are commonly found in many malignancies [20], and the frequency of alterations or mutations is the highest in the SWI/SNF chromatin remodeling complex. This is possibly related to the unique ability of the SWI/SNF complex to slide or eject nucleosomes from chromatin, rendering chromatin more accessible to regulatory proteins and RNA. It has been found that 20% of human malignancies contain at least one mutation in the subunits of SWI/SNF chromatin remodelers [27].

Table 2. List of epigenetic agents approved by the FDA. Refs. [17,26].

The Composition and Modular Structure of Human SWI/SNF Chromatin Remodelers

SWI/SNF chromatin remodelers are macromolecular complexes assembled from various subunits. These subunits are encoded by 29 different genes. Each SWI/SNF complex consists of 10 to 15 subunits. Discrete combinations of various subunits, together with multiple splice variants of the subunits, generate a great number of different SWI/SNF chromatin remodelers. Each SWI/SNF complex contains one mutually exclusive catalytic subunit, either SMARCA4 or SMARCA2 (with the alias BRG1 or BRM, respectively). Thus, the SWI/SNF complex is also called the BRG1/BRM-associated factor (BAF). Depending on the different combinations of subunits, human BAFs are classified into canonical BAF (cBAF), polybromo-associated BAF (PBAF), and noncanonical BAF (ncBAF) (Table 3). The SWI/SNF complexes frequently bind to the enhancers and promoters of their targets. All the SWI/SNF complexes utilize energy provided by the catalytic subunit through ATP hydrolysis to remodel chromatin structure through nucleosome sliding and eviction mechanisms [32]. Recent results of cryo-electron microscopy studies suggest that all the subunits in the human SWI/SNF complex are organized into a similar "clamp"-like three-module structure to interact with their substrate, the nucleosome [32]. For example, the cBAF complex consists of the adenosine triphosphatase (ATPase) module, the actin-related protein (ARP) module, and the base module. Within the ATPase module, the C-terminus of SMARCA4 grasps the nucleosome. The ARP module bridges the ATPase module and the base module.
Within the base module, SMARCB1 binds to the nucleosome, and ARID1A/B stabilizes the base module by binding to SMARCB1, the N-terminus of SMARCA4, and all other base subunits, which is required for the efficient nucleosome-sliding activity of the cBAF [33]. Subunits in the PBAF complex are organized into a tripartite modular structure, just like those in the cBAF complex. The three modules of the PBAF complex are the ATPase module, the ARP module, and the substrate recruitment module (SRM). The ATPase module and ARP module of the PBAF play similar roles as in the cBAF complex. As a homolog of ARID1A/B in cBAF, ARID2 is essential to the ability of PBAF to slide nucleosomes. Differently from cBAF, there are three unique histone-binding subunits, PHF10, PBRM1, and BRD7, in PBAF, forming a submodule [34]. The PHF10 subunit binds to the histone tails through its plant homeodomain (PHD) fingers, recognizing methylated and unmethylated histone H3K4; the BRD7 subunit binds to the histone tail through its bromodomain, recognizing the acetylated lysine residues of histones; PBRM1 binds to histone tails through a total of six bromodomains, recognizing acetylated lysine residues of histones, and through its bromo-adjacent homology (BAH) domain, recognizing methylated histones and nucleosomes [28,29,34-36]. Results of cryo-electron microscopy studies in mammalian cells also suggest that a similar module structure exists in human ncBAF. In contrast to cBAF and PBAF, no ARID-domain subunits exist in ncBAF; the location and function of the ARID-containing subunits are taken over by the GLTSCR1/1L subunits [32]. The elucidation of the structure of the SWI/SNF complexes will help in studying the importance of disease-associated mutations in the different subunits.

The Most Common Dysregulation of SWI/SNF in NSCLC

The composition of SWI/SNF chromatin remodelers is tissue- or cell-type-specific [37]. Results from many large-scale exome-wide sequencing studies have shown that different tumor types exhibit specific SWI/SNF mutation patterns [38]. For example, almost all malignant rhabdoid tumors (MRTs) had an inactivating mutation of the SMARCB1 gene [39,40], and it was the only mutated SWI/SNF subunit found in MRT. To explore which SWI/SNF chromatin remodeler subunits are most likely altered in NSCLC, we queried the 29 genes encoding all the subunits of human SWI/SNF complexes for mutations in NSCLC patients in the cBio Cancer Genomics Portal (http://cbioportal.org) dataset. In total, 8,854 patients and 11,037 samples from 28 studies were included. The pool of NSCLC patients in the query covers all types of NSCLC, including squamous cell carcinoma, lung adenocarcinoma, and large-cell carcinoma. The top six subunits with the highest mutation frequencies were the mutually exclusive catalytic subunits SMARCA4/2 (7% and 2.8%), the AT-rich interactive domain-containing subunits ARID1A/ARID1B/ARID2 (6%, 4%, and 4%), and the PBRM1 subunit (2.5%) (Table 4; results from a combined query of 28 studies with 8,854 patients and 11,037 samples in cBioPortal, in which 2,201 (25%) of queried patients and 2,404 (22%) of samples had mutations). Further classification of the type of mutation in the top six subunits showed that more than 40% of the mutations were missense, which could either positively or negatively influence the expression of the gene; SMARCA4, ARID1A, and ARID2 had the highest frequencies of truncating mutations, at 32.6%, 50.8%, and 39.1%, respectively, which are usually correlated with the loss of protein expression.
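A query of this kind can be reproduced offline from a cBioPortal mutation export. The sketch below is a minimal illustration, assuming a MAF-style TSV with the standard Hugo_Symbol, Variant_Classification, and Tumor_Sample_Barcode columns; the file name and the restriction to six genes are placeholders, not the exact query used above.

```python
# Minimal sketch: per-gene mutation frequency across samples from a
# MAF-style TSV export (assumed columns: Hugo_Symbol,
# Variant_Classification, Tumor_Sample_Barcode).
import pandas as pd

SWI_SNF_GENES = ["SMARCA4", "SMARCA2", "ARID1A", "ARID1B", "ARID2", "PBRM1"]
TRUNCATING = {"Nonsense_Mutation", "Frame_Shift_Del",
              "Frame_Shift_Ins", "Splice_Site"}

maf = pd.read_csv("nsclc_mutations.tsv", sep="\t", low_memory=False)
n_samples = maf["Tumor_Sample_Barcode"].nunique()

subset = maf[maf["Hugo_Symbol"].isin(SWI_SNF_GENES)]
for gene, grp in subset.groupby("Hugo_Symbol"):
    mutated = grp["Tumor_Sample_Barcode"].nunique()     # samples with any mutation
    truncating = grp["Variant_Classification"].isin(TRUNCATING).sum()
    frac_trunc = truncating / len(grp) if len(grp) else 0.0
    print(f"{gene}: {100 * mutated / n_samples:.1f}% of samples mutated, "
          f"{100 * frac_trunc:.1f}% of mutations truncating")
```

Counting distinct sample barcodes, rather than mutation rows, avoids double-counting samples that carry more than one mutation in the same gene, which is how per-sample alteration frequencies such as those in Table 4 are usually reported.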
The result of our query showed that mutations of SMARCA4/A2, ARID1A/ARID1B/ARID2, as well as PBRM1 were the most commonly mutated SWI/SNF subunits in the population of NSCLC patients. These subunits are either catalytic subunits or histone/DNA-binding subunits, and loss of these subunits directly dampens the function of the SWI/SNF complexes in regulating chromatin structure. The result is in agreement with the literature [41-43]. ARID1A and ARID1B are highly homologous, mutually exclusive subunits in cBAF that can directly bind DNA. In PBAF, ARID2 and PBRM1 play similar roles in binding DNA. BRG1 and BRM are the only two mutually exclusive catalytic subunits; their helicase/ATPase domains provide energy for the sliding and eviction of nucleosomes from specific regions of the chromatin. cBAF has either BRG1 or BRM subunits. In addition, ARID1A, ARID2, BRG1, and PBRM1 are all bona fide tumor suppressors [27,38,44,45].

Mechanisms of Dysregulation of SWI/SNF Complexes Contributing to the Tumorigenesis of NSCLC

SWI/SNF complexes are master regulators of gene expression. They usually bind to the regions of the genome containing critical cis-acting transcriptional regulatory elements, such as enhancers and promoters [46]. cBAFs are mainly found in chromatin regions with enhancers, while PBAFs are found in regions of proximal promoters. Dysregulation of the regulatory role of SWI/SNF complexes in transcription dramatically shifts the transcriptome landscape of the cell, leading to abnormal differentiation and development of multiple cell types [47,48], irregular cell cycle regulation [49], and compromised DNA damage repair processes [28]. Therefore, the consequences of SWI/SNF dysregulation-induced alterations of chromatin structure are profound, ranging from neurological disorders to embryonic death, as well as tumorigenesis. A definitive answer to explain how exactly the dysregulation of these SWI/SNF subunits leads to tumorigenesis is not available [41]. However, research findings relating the alteration of SWI/SNF complexes to the decreased function of some tumor suppressor genes, the increased expression of some oncogenes, and weakened DNA damage repair may provide some clues about the mechanisms by which dysregulation of SWI/SNF complexes contributes to the tumorigenesis of NSCLC [41,50,51].

Decreased Function of Tumor Suppressor Genes via the Dysfunction of SWI/SNF Complexes

The active status of the enhancer elements of multiple tumor suppressor genes is maintained by the binding of SWI/SNF complexes [52]. Depletion of SWI/SNF complexes represses the expression of these genes. For example, loss of ARID1A was accompanied by decreased expression of several classical tumor suppressors, such as PIK3IP1 [53], CDKN1A [54], TGF-β [54], SMAD3 [55], and E2F4 [56]. In addition, the cell cycle-regulating and growth-inhibiting actions of the p53 and RB proteins depend on the SWI/SNF complex [57-60]. BRG1 and BRM can directly bind to the RB gene, facilitating its downstream targets responsible for cell cycle regulation and repression [61].

Increased Expression of Some Oncogenes Caused by the Dysfunction of SWI/SNF Complexes

c-Myc is a transcription factor that is involved in almost every aspect of the oncogenic process, such as facilitating cell cycle regulation, proliferation, and differentiation of cancer cells.
Loss of BAG11 enhances the activity of the c-myc gene, which contains a super-enhancer structure, in human neuroblastoma cells [62,63] and melanoma cells [64]. Super-enhancers were described as a class of regulatory regions with unusually strong enrichment for the binding of transcriptional co-activators [46]. While SWI/SNF complexes repress acetylation at common enhancers, they prevent hyperacetylation at super-enhancers [65]. In addition, depletion of SNF5, a core subunit of the cBAF and the PBAF, increased the interaction of c-Myc with its targets on the chromatin and enhanced the activity of c-Myc [46]. Oncogenic AP-1 transcription factors are another group of proteins whose activity is selectively maintained after the loss of SNF5 expression [46]. Results from chromatin accessibility analysis of lung adenocarcinoma cells from a genetically engineered mouse model (GEMM) of Kras LSL-G12D/+ and Trp53 fl/fl (KP) initiated with BRG1 showed that metastasis-derived tumor cells were enriched for peaks with AP-1 transcription factor motifs, while other tumor cells were depleted of AP-1 peaks, suggesting that AP-1 may be involved in the metastasis of the tumor [66]. Furthermore, the inactivating mutation of ARID1A also reactivates repressed TERT transcriptional activity and confers a growth advantage on cancer cells [54].

Impairment of DNA Damage Repair Pathways and Genomic Instability by the Dysfunction of SWI/SNF Complexes

DNA is damaged by various endogenous and exogenous toxic chemicals. Double-strand breaks (DSBs) are among the most deleterious DNA lesions, with serious consequences if they are not repaired; therefore, multiple DNA repair pathways exist to repair DSBs. The contribution of defective DSB repair capacity to tumorigenicity has been widely recognized, and there is ample research data supporting the theory that dysfunction of SWI/SNF complexes can impair DNA damage repair ability and might contribute to tumorigenesis. Multiple studies have shown that nucleosomes can block nucleases at DNA ends [67,68] and that alteration of SWI/SNF subunits will hinder the access of nucleases to the DNA, which is essential for DNA damage repair. For example, suppression of ARID1A reduces both the non-homologous end joining (NHEJ) and homologous recombination (HR) repair pathways [69,70]. A study by Park et al. showed that dysregulation of SMARCA4 results in inefficient DNA double-strand break (DSB) repair and a defect in γ-H2AX phosphorylation after DNA damage, suggesting that the SWI/SNF complexes facilitate DSB repair, at least in part, by promoting H2AX phosphorylation on chromatin [71]. In addition, ARID1A promotes mismatch repair (MMR) by recruiting MSH2 [72], one of the MMR proteins. MMR is a system for recognizing and repairing DNA damage during DNA replication. Cells with impaired DNA MMR due to the loss of ARID1A usually have a high TMB. Furthermore, ARID1A is also required to resolve transcription-replication conflicts; otherwise, replication stress will ensue with ARID1A deficiency. Activation of ATR (ataxia telangiectasia-mutated and RAD3-related) and its downstream kinases, checkpoint kinases 1 and 2, will follow, and eventually the replication cycle will be paused to resolve the conflict. Loss of ARID1A thus causes the cells to become addicted to ATR activity [73].
Loss of SMARCA4 and ARID1A also impairs the function of topoisomerase II-alpha (TOP2A) and its crucial role in the decatenation of newly replicated sister chromatids during mitosis, which could also partially explain the high occurrence of mutations and the high genomic instability in tumors harboring inactivating mutations of BRG1 and ARID1A. Genomic instability in NSCLC with a deficiency of SWI/SNF is also related to enhanced Aurora A activity; Aurora A belongs to a group of kinases with serine/threonine activity and plays a crucial role in the spindle assembly machinery during cell mitosis [74]. For example, Tagal et al. found that inactivation of AURKA induces apoptosis and cell death in vitro and in xenograft mouse models of NSCLC cells [75].

Vulnerability of NSCLC with Dysregulation of SWI/SNF Complexes

Inhibitors targeting synthetic lethality have been explored for therapeutic purposes in NSCLC treatment [73,76-79]. Synthetic lethality occurs when the inhibition of two genes is lethal while the inhibition of each single gene is not. The advancement of biological techniques for gene knockout has greatly accelerated the identification of synthetic lethality with the loss of SWI/SNF complex genes. EZH2 (enhancer of zeste homolog 2) is one of the most promising synthetic lethal targets identified with loss of SWI/SNF complexes [80]. EZH2, the catalytic subunit of polycomb repressive complex 2 (PRC2), silences gene expression through the methylation of lysine 27 on histone H3 (H3K27me3) [53]. SWI/SNF and PRC2 complexes co-localize on the chromatin, antagonizing each other and exerting opposite effects in the promotion of tumorigenesis [81]. Inactivating mutations of SWI/SNF derepress the activity of PRC2, and the cell adapts to the heightened PRC2 status. Recently, tazemetostat, the first oral EZH2 inhibitor, received FDA approval for patients with relapsed or refractory follicular lymphoma and advanced epithelioid sarcoma [82]. Various other EZH2 inhibitors are under clinical development, and there has been significant interest in combining EZH2 inhibitors with ICIs to overcome immunotherapy resistance by reprogramming the TME [83-85]. Another group of synthetic lethality targets is the mutually exclusive paralogs of SWI/SNF complex subunits [86]. SMARCA4-deleted cancer cells are highly dependent on the paralog SMARCA2 for their survival, and ARID1B is required for the survival of ARID1A-depleted cells as well [13,76]. The ATPase domain and bromodomain in BRG1 and BRM, as well as in PBRM1, can potentially be used as druggable pockets for the design of small-molecule inhibitors. However, the ARID domain-containing SWI/SNF subunits, such as ARID1A, ARID1B, and ARID2, do not contain similar druggable pockets. The recently developed proteolysis-targeting chimera (PROTAC) technique circumvents the requirement for druggable pockets to target a gene product [87]. PROTACs are pharmacological agents linking a binding ligand for the target protein with an E3 ubiquitin-protein ligase-recruiting moiety. Different from druggable pocket-based small-molecule inhibitors, which inhibit the function of their target proteins, PROTACs first transfer ubiquitin onto the target protein and thereby initiate its proteasomal degradation. For example, PRT3789 is a potent and selective BRM-targeted degrader. Preclinical experiments with PRT3789 demonstrated robust inhibition of cell proliferation in SMARCA4-deleted NSCLC in vitro and in PDX tumors ex vivo [88].
PRT3789 will soon enter a phase I clinical trial for advanced or metastatic solid tumors [89]. In addition, dysfunction of ARID1A impairs multiple DNA damage repair pathways, such as DSB repair, MMR, and the resolution of stress induced by transcription-replication conflicts. To compensate for this compromised DNA damage repair, cancer cells with ARID1A deficiency must depend on redundant DNA repair pathways, such as poly-ADP ribose polymerase (PARP), to survive. It has been shown that NSCLC cells deficient in ARID2 are sensitive to treatment with PARP inhibitors [90]. Comparably, cancer cells with ARID1A deficiency are also susceptible to the inhibition of ATR activity. The significantly lower expression of cyclin D1 in BRG1-deficient NSCLC cells compared with BRG1-intact control cells makes cyclin D1, as well as the CDK4/6 proteins, a potential synthetic lethal target for the treatment of NSCLC [91]. Finally, BRG1-deficient NSCLC cells were found to have a reduced transcriptional response to energy stress and to depend more on oxidative phosphorylation as their energy source; the oxidative phosphorylation pathway thus becomes a synthetic lethal target. IACS-010759, a potent inhibitor of mitochondrial complex I of the electron transport chain, has been shown to inhibit the growth of such NSCLC cells [92]. Although there is no direct evidence yet to support the clinical use of these therapeutics targeting the vulnerabilities of SWI/SNF-deficient NSCLC, and the available data come mainly from early preclinical and phase I/II clinical studies in the general NSCLC population, we expect that new data on the efficacy and toxicity of these therapeutics in the subset of NSCLC with SWI/SNF deficiency will accumulate rapidly. As many as 32 clinical trials of therapeutics for NSCLC patients with SWI/SNF deficiency were identified by searching the clinicaltrials.gov website (Table 5). Most of these studies are recruiting study subjects or are in preparation for recruitment. Their results should help clarify the clinical value of targeting this subset of NSCLC patients.

SWI/SNF Deficiency Influences the Immunogenicity of Malignancies

Immunotherapy with ICIs has become one of the standards of care for patients with advanced NSCLC, as well as for patients with early-stage NSCLC following surgery or concurrent chemoradiation. Six ICIs have been approved by the FDA for NSCLC so far (Table 1). However, a significant portion of patients either do not respond or respond initially but then progress due to acquired resistance [93]. Understanding the mechanisms by which cancer cells evade ICI-mediated T-cell cytotoxicity, and identifying the crucial regulators of this process, is urgently needed to enhance the efficacy of ICI treatment. Dysfunction of the SWI/SNF complex occurs commonly in cancers of various histologies, and previous studies suggest that loss of SWI/SNF subunits has a significant effect on the response and resistance of cancer patients to ICI treatment. The influence of the SWI/SNF complex on tumor immunity depends not only on its role in regulating gene transcription but also on its cooperation with other epigenetic regulators, especially PRC2.
Thus, the SWI/SNF complex can modulate tumor immunogenicity through multiple mechanisms. Intrinsically, as a master regulator of gene transcription, deficiency of the SWI/SNF complex increases the expression of some protein isoforms that can function as neoantigens, increasing the likelihood that tumor cells are recognized by an activated immune system. The SWI/SNF complex also plays an important role in the development of T lymphocytes and is crucial for the maturation of effector T lymphocytes [94-96]. Furthermore, SMARCA4 can upregulate the B-cell transcriptome during B-cell activation to promote cell proliferation [97]. ARID1A deficiency, for example, compromises the mismatch repair pathway, resulting in increased genomic instability and a high TMB. PD-L1 protein is also highly expressed in ARID1A-deficient tumor cells [98], which predicts the level of response to ICI treatment in lung cancer [16]. Inactivation of three members of the PBAF subfamily, PBRM1, Arid2, and Brd7, rendered mouse melanoma cells more sensitive to T-cell-mediated cytotoxicity in vitro and to ICI treatment in a mouse model in vivo [99]. A loss-of-function mutation in PBRM1 in kidney cancer was also associated with a better treatment response to PD-1 blockade [100]. As a non-covalent chromatin regulator, the SWI/SNF complex can cooperate with other epigenome regulators, such as PRC2, to maintain the epigenomic landscape of a cell. PRC2 is a covalent epigenetic regulator that deposits methyl groups onto lysine 27 of histone H3 (H3K27me3), thereby repressing gene activity. The long list of gene targets under the repressive control of PRC2 includes hundreds of IFN-γ-stimulated genes, cytokines, and receptors. This list is even longer in cancer cells, particularly those with dysregulated SWI/SNF complexes, since PRC2 activity is greatly elevated in the absence of competition from the SWI/SNF complexes [101]. For example, in a tumor cell lacking ARID1A expression, the elevated methyltransferase activity of EZH2 suppresses Th1-type chemokines and IFN-γ-responsive genes by converting H3K27 to H3K27me3 at their promoters. This results in limited CD8+ T-cell infiltration into the TME and a low immune response after treatment with ICIs [102]. With regard to the immune microenvironment, SWI/SNF complexes are required for the expression of a large number of interferon (IFN)-inducible genes [103], including the induction of CIITA, the master regulator of major histocompatibility complex class II expression, which is essential for the induction of an effective immune response against tumor antigens [104]. SMARCA4 deficiency has been associated with elevated levels of tumor-infiltrating lymphocytes (TILs) [79,105], while ARID1A alterations were correlated with markedly high immune infiltrates in endometrial, stomach, and colon cancer but lower CD8+ T-cell infiltration in ARID1A-mutant renal clear-cell carcinoma, indicating that the association between ARID1A alterations and immune infiltrates is cancer-type-dependent [31]. Adding another layer of complexity, a recent study reported that SMARCA4-deficient tumors were infiltrated by FOXP3+ (Forkhead box protein P3+) cells and neutrophils [106], both of which are important for maintaining an immunosuppressive environment favoring tumorigenesis [107,108].
Therefore, the effects of dysregulated SWI/SNF on tumor immunogenicity combine with other genetic/epigenetic factors and tissue/differentiation-specific factors to determine the final response to ICI treatment, whether monotherapy or combined therapy (Figure 1).

Figure 1. An alteration in the SWI/SNF chromatin remodeling complex may compensatorily elevate PRC2 activity, which results in decreased expression of the mismatch repair gene MSH2. This may lead to increased tumor mutational burden and microsatellite instability, which will further promote the chance of neo-antigens being presented by MHC class I and, in turn, enhance T-cell recognition of the tumor antigen, potentially sensitizing the tumor cells to the anti-tumor effects of immune checkpoint inhibitors (PRC2: polycomb-repressive complex; MSI: microsatellite instability; TMB: tumor mutation burden; TCR: T-cell receptor; MSH: mismatch repair gene).

Outcome of ICI Treatment for NSCLC with Deficiency of SWI/SNF

Deficiency of the SWI/SNF complex impairs the ability of the cell to self-renew and differentiate. It is not surprising that NSCLC with loss-of-function mutations usually pursues an aggressive clinical course with high heterogeneity and eventually ends with a very dismal outcome [109]. To further explore the relationship between deficiency of the SWI/SNF complex and response to ICI treatment, we reviewed the available relevant studies, which are listed in Table 6 together with the targeted SWI/SNF subunits and the outcome of each study. It is noteworthy that all the listed studies were retrospective except for one case report. Results from a few studies suggested that NSCLC patients with a deficiency of SWI or SNF might be more sensitive to ICI treatment [109,110,112-115,117]. Overall, however, the reported relationship between SWI/SNF deficiency and response to ICI treatment is inconsistent, with some studies showing a positive association and others a negative one. In 2019, Naito T.
reported that nivolumab effectively controlled the metastases of an SMARCA4-deficient lung adenocarcinoma for more than 14 months after the failure of three standard chemotherapy regimens; the tumor had a high TMB but no expression of the PD-L1 protein [113]. The results of another study, by Schoenfeld et al., also support the idea that BRG1 could be a biomarker for ICI response: in that study, BRG1 deficiency was correlated with a higher overall response rate (ORR), accompanied by higher TMB and lower PD-L1 expression, compared with control patients [112]. However, this conclusion was not supported by the studies published by Dagogo-Jack et al. and Alessi et al. [111,116]. One common factor in these two studies was the involvement of co-mutated genes: co-mutation of STK11/KEAP1 with BRG1 existed in all four cases in the study by Dagogo-Jack et al., and co-mutation of KRAS with BRG1 in the study by Alessi et al. It is, therefore, reasonable to surmise that co-mutated genes need to be excluded before a SWI or SNF subunit can be evaluated as a biomarker for ICI treatment. The results of the study by Zhou et al. corroborated this assumption: the correlation between alteration of BRG1 and longer progression-free survival (PFS) was only present for "pure" alterations of BRG1 without STK11 or KEAP1 mutations [115,117]. It is well known that NSCLC cells harboring mutations of KRAS, P53, KEAP1, and STK11 are resistant to ICI treatment [120], and these genes are often highly mutated in NSCLC cells with SWI/SNF complex deficiency as well [100]. Thus, the value of SWI/SNF complex deficiency as a biomarker of ICI treatment for NSCLC should only be evaluated after excluding the influence of these co-mutated genes; otherwise, the correlation disappears or is minimized by the co-mutated genes. High heterogeneity in tumors with BRG1 deficiency [121] might also be responsible for the seemingly conflicting results of ICI treatment in NSCLC patients with alterations of SWI or SNF. Two studies have assessed the relationship between loss of PBRM1 and response to ICI treatment, and their results consistently supported the notion that deficiency of the PBRM1 subunit might be a negative biomarker for the ICI response [118,119]. Interestingly, two reports on the relationship between deficiency of ARID subunits and response to ICI treatment showed a positive correlation, with higher median overall survival (MOS) and overall survival (OS) in NSCLC patients whose ARID1 mutations were accompanied by a higher TMB and higher PD-L1 expression [115,117]. These results suggest that alterations of different SWI/SNF subunits may each have a unique impact on the ICI response of NSCLC with deficient SWI/SNF subunits. Because the majority of the studies listed are retrospective, which may lead to biased conclusions, prospective studies in NSCLC patients are urgently needed to clarify the impact of SWI/SNF alterations on the response of this subset of patients to ICI treatment.

Perspectives

Mutations of genes encoding SWI/SNF chromatin remodelers occur commonly in NSCLC. Treatment of the subset of NSCLC with SWI/SNF alterations, which tends to be dedifferentiated and carries a very poor prognosis, has been explored extensively in recent years. Certain synthetic lethalities in these NSCLC cells can be exploited as vulnerable targets for treatment.
While some therapeutics targeting these vulnerabilities, such as EZH2 inhibitors and PARP inhibitors, have been approved by the FDA, others, such as PROTACs and ATR inhibitors, are still under clinical development (Kymera Therapeutics, Inc., Watertown, MA, USA; Impact Therapeutics, Inc., Nanjing, China). Several factors might contribute to the inconsistent results of studies on the relationship between SWI/SNF deficiency and response to ICI treatment. Firstly, the response to ICIs might be confounded by coexisting mutations in other important tumor suppressors or oncogenes, such as co-mutated STK11, KEAP1, and P53, which can influence patients' response to treatment with ICIs. Secondly, high heterogeneity in SWI/SNF-deficient tumors might also account for the seemingly conflicting results of studies on the response to ICI treatment in NSCLC patients with SWI/SNF alterations. In addition, alterations in each different SWI/SNF subunit might have unique effects on the ICI response. Results from future prospective studies using larger cohorts of NSCLC patients will be needed to resolve these inconsistencies. Lastly, it is worth pointing out that combining inhibitors of other chromatin modulators with ICIs might be effective for the treatment of NSCLC with deficient SWI/SNF chromatin remodelers. As mentioned above, the elevated EZH2 activity in tumor cells with alterations of SWI/SNF subunits suppresses Th1-type chemokines and IFN-γ-responsive genes. One possibility is that this uncontrolled PRC2 activity is partially responsible for the immunologically "cold" phenotype of some SWI/SNF-deficient tumors, even though TMB is remarkably high in some of them [118,119]. Numerous studies have tested combination therapies to overcome resistance to immunotherapy, and targeting SWI/SNF chromatin regulators can be a tantalizing strategy given that SWI/SNF complexes may have a global effect on the TME. The caveat is that drugs targeting chromatin regulators may act differently on various subsets of immune cells, so the net effect on immune cell killing can be heterogeneous and difficult to predict. One could envision studies using the newer approach of antibody-drug conjugates [122] to target chromatin regulators in specific cell subsets, and we hope that current and future clinical studies will shed light on this conundrum in the immunotherapy of NSCLC patients with SWI/SNF deficiency.
Drosophila neurocalcin, a fatty acylated, Ca2+-binding protein that associates with membranes and inhibits in vitro phosphorylation of bovine rhodopsin.

Neurocalcins belong to a family of neuronal specific EF hand Ca2+-binding proteins defined by recoverin. Previously, we reported the cloning and initial characterization of neurocalcin in Drosophila melanogaster (Teng, D. H.-F., Chen, C.-K., and Hurley, J. B. (1994) J. Biol. Chem. 269, 31900-31907). We showed that the Drosophila neurocalcin protein (DrosNCa) is expressed in neurons and that bacterially expressed recombinant DrosNCa (rDrosNCa) can be myristoylated. Here, we present two lines of evidence that DrosNCa is fatty acylated in vivo. First, the mobility of affinity-purified native DrosNCa on two-dimensional gel electrophoresis is identical to that of myristoylated rDrosNCa and distinct from that of nonacylated rDrosNCa. Second, the membrane binding properties of native DrosNCa are similar to those of myristoylated rDrosNCa; both of these proteins bind to membranes at 0.2 mM Ca2+, whereas nonacylated rDrosNCa always remains soluble. It has been shown that recoverin inhibits the phosphorylation of rhodopsin when Ca2+ is present (Kawamura et al., 1993) and that a Ca2+-dependent recoverin/rhodopsin kinase interaction underlies the inhibitory effect of recoverin (Chen et al., 1995). Given the similarities between recoverin and neurocalcin, we examined the effect of DrosNCa on rhodopsin phosphorylation. We find that rDrosNCa is capable of inhibiting bovine rhodopsin phosphorylation in vitro in a Ca2+-dependent manner. The inhibitory activity of rDrosNCa is enhanced by myristoylation, and the potency of its effect is similar to that of recoverin. Two other related EF hand proteins, guanylate cyclase-activating protein-2 and calmodulin, are only poor inhibitors in these phosphorylation assays. In vitro inhibition of rhodopsin phosphorylation therefore appears to be an assayable property of a subset of recoverin-like proteins.

Neurons are highly specialized cells that have evolved elaborate mechanisms to sense, integrate, and transmit signals. The calcium ion (Ca2+) is a universal second messenger that is a key player in neuronal signaling. Dynamic fluxes in the intracellular concentration of Ca2+ during neuronal excitation, recovery, and adaptation or potentiation are interpreted by Ca2+-binding proteins that act as transducers; the binding or release of Ca2+ causes these proteins to switch into an "on" or "off" state in the regulation of effector activity. Many Ca2+-sensing transducers are members of the EF hand superfamily of Ca2+-binding proteins (Moncrief et al., 1990), one of the best characterized being calmodulin. In the past few years, a new and rapidly expanding EF hand subfamily has been defined by recoverin and its cognates (Dizhoor et al., 1991; Hurley et al., 1993). Electrophysiological analyses of mammalian recoverin and frog S-modulin reveal that these photoreceptor proteins prolong photoexcitation at high free Ca2+ concentrations (Gray-Keller et al., 1993; Kawamura, 1993). Accumulated biochemical evidence suggests that recoverin/S-modulin delays photorecovery by preventing rhodopsin kinase from phosphorylating rhodopsin in high Ca2+ conditions (Chen et al., 1995; Klenchin et al., 1995). Indeed, Chen and co-workers (1995) have recently found that recoverin directly binds to rhodopsin kinase at high Ca2+ levels.
Recoverin is heterogeneously fatty acylated on its amino terminus, and this posttranslational modification enhances the association of recoverin with membranes in high Ca2+ conditions. However, the presence of a fatty acyl moiety on recoverin is not necessary for its inhibition of rhodopsin kinase (Chen et al., 1995). Flaherty et al. (1993) have shown that the crystallographic structure of recoverin, with Ca2+ bound in its EF3 site, is compact and that the protein is composed of two domains, each having one functional EF hand. The three-dimensional structure of recoverin also reveals the presence of a hydrophobic crevice, composed of aromatic and aliphatic amino acid residues distributed in the N-terminal half of the protein, that may be involved in interactions with a target factor(s). Interestingly, Polans et al. (1991) have found that recoverin is an antigen in the autoimmune response of people afflicted with cancer-associated retinopathy, but its role in this degenerative disease is unclear. Numerous vertebrate cognates of recoverin have been identified, including visinin (Yamagata et al., 1990), multiple isoforms of neurocalcin (Hidaka and Okazaki, 1993; Terasawa et al., 1992), hippocalcin, and neuronal visinin-related proteins like Vilip (Lenz et al., 1992) and NVPs (Kajimoto et al., 1993). Protein sequence comparisons indicate that recoverin, S-modulin, and visinin define a distinct subclass that is expressed in photoreceptor cells. Another subclass of these neuronally expressed Ca2+-binding proteins comprises bovine neurocalcin, rat hippocalcin, rat NVPs, and chicken Vilip. The evolutionary diversity of these EF hand recoverin-like proteins is further exemplified by the recent discoveries of yet another subclass defined by GCAP-1 (Palczewski et al., 1994) and GCAP-2 (Dizhoor et al., 1995), two mammalian photoreceptor proteins that activate membrane guanylate cyclase when free Ca2+ levels are low. Two invertebrate homologues of recoverin have been reported to date: Drosophila frequenin and neurocalcin. Pongs et al. (1994) identified frequenin from studies on the Shaker-like V7 mutants of D. melanogaster. They proposed that the overexpression of frequenin in V7 mutants causes the augmented facilitation of neurotransmitter release at neuromuscular junctions. In addition, Pongs et al. reported that recombinant frequenin is capable of activating bovine rod outer segment membrane guanylate cyclase in a Ca2+-dependent manner in vitro, but the in vivo target of frequenin has not been established. In a search for homologues of recoverin in D. melanogaster, we discovered Drosophila neurocalcin (nca), a gene coding for a protein that is 88% identical to the bovine neurocalcin δ isoform at the primary sequence level (Teng et al., 1994). Initial characterization revealed that the Drosophila neurocalcin protein (DrosNCa) is expressed in neurons and that bacterially expressed recombinant DrosNCa could be myristoylated by N-acyl transferase. Here, we present evidence that native DrosNCa is fatty acylated in vivo and that this modification significantly enhances its Ca2+-dependent association with Drosophila membranes.
In addition, we have found that at high Ca2+ concentrations in reconstituted assays, both recombinant myristoylated and nonacylated DrosNCa are capable of inhibiting the phosphorylation of bovine rhodopsin by rhodopsin kinase, but the recombinant DrosNCa proteins neither stimulate nor inhibit mammalian photoreceptor membrane guanylate cyclase in high or low Ca2+ conditions.

Materials

Phenyl-Sepharose CL-4B and CNBr-Sepharose were purchased from Pharmacia Biotech Inc. Protein quantitation reagent, ampholytes, and two-dimensional gel protein standards were obtained from Bio-Rad. [γ-32P]ATP (3000 Ci/mmol) was purchased from DuPont NEN. The ECL detection kit for Western analyses was procured from Amersham Corp. Myristoylated recombinant GCAP-2 was kindly provided by R. Hughes and A. Dizhoor.

Methods

Native gel electrophoresis and Western analyses were performed as described by Teng et al. (1994). Two-dimensional gel electrophoresis was done according to the protocols of Bio-Rad. Anti-rDrosNCa rabbit antibodies 6094 and 6515 were generated and affinity-purified as described previously (Teng et al., 1994).

Purification of Recombinant Recoverin and Neurocalcin-Myristoylated and nonacylated recombinant bovine recoverin (rBovRv)1 and recombinant DrosNCa (rDrosNCa) were produced in Escherichia coli as described by Ray et al. (1992). rDrosNCa and rBovRv were isolated as described previously (Teng et al., 1994) with the following modification: proteins that were bound to phenyl-Sepharose CL-4B matrix in buffer containing 1 mM CaCl2 were eluted with 10 mM Tris-HCl (pH 8.0), 1 mM MgCl2, 1 mM DTT, 5 mM EGTA, and 0.2 mM PMSF. Protein concentrations were quantified by using a Bio-Rad protein assay and by densitometry of SDS-polyacrylamide gels stained with Coomassie Blue.

Isolation of Native DrosNCa-Native neurocalcin was purified from Canton-S Drosophila adult heads using an AP6515 antibody affinity column. Fly heads were homogenized at 200 mg of tissue/ml in 50 mM Tris (pH 7.5), 2 mM MgCl2, 2 mM EGTA, 0.5 mM PMSF, 5 μg/ml aprotinin, and 5 μg/ml leupeptin. The homogenate was centrifuged at 1000 × g at 4°C for 10 min. The supernatant was loaded onto an AP6515 column preequilibrated with TCMN buffer (50 mM Tris (pH 7.5), 0.1 mM CaCl2, 2 mM MgCl2, 0.1 M NaCl, 0.2 mM PMSF). The column was washed with more than 20 volumes of TCMN, followed by 3 volumes of TCMN plus 0.5 M NaCl and a final wash of 3 volumes of TCMN. Proteins bound to the column were eluted with 2-3 volumes of 0.1 M glycine (pH 2.4) and immediately neutralized. The eluted proteins were examined on SDS-polyacrylamide gels and on immunoblots.

Fluorescence Emission Spectra-All fluorescence measurements were performed at 25°C on an SLM 8000C spectrofluorimeter (SLM Aminco, Urbana, IL) using a 1 × 1-cm quartz cuvette containing 1.5 ml of extensively mixed sample. The excitation wavelength was set at 292 nm, and the emission spectrum was recorded from 300 to 410 nm with an emission scan speed of 1 nm/s. The bandwidths were 8 nm for both excitation and emission. The emission spectrum of 1 μM protein in 50 mM MOPS (pH 7.0), 50 mM KCl, and 1 mM HEDTA was recorded. CaCl2 was then added to the sample to a free Ca2+ concentration of 100 μM, and a second emission spectrum was recorded. Background fluorescence of the buffer in the absence of protein was subtracted from all of the spectra.
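For readers reproducing this kind of buffered-Ca2+ titration, the total CaCl2 needed to reach a given free Ca2+ in the presence of a 1:1 chelator can be estimated from simple binding arithmetic. The sketch below is illustrative only: the HEDTA-Ca2+ dissociation constant used (~4 μM near pH 7) is an assumed round value, not one taken from this paper, and should be replaced with a constant appropriate to the actual pH and ionic strength.

```python
def total_ca_needed(ca_free_uM, chelator_uM, kd_uM):
    """Total CaCl2 (uM) required to reach a target free Ca2+ with a 1:1 chelator.

    Bound Ca2+ = [chelator] * [Ca]free / (Kd + [Ca]free); total = free + bound.
    """
    bound = chelator_uM * ca_free_uM / (kd_uM + ca_free_uM)
    return ca_free_uM + bound

# 100 uM free Ca2+ in 1 mM HEDTA, assuming Kd ~ 4 uM (hypothetical value)
print(total_ca_needed(100.0, 1000.0, 4.0))  # ~1062 uM total CaCl2
```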
Ca2+-dependent Membrane Association

Native DrosNCa-Wild type Canton-S adult Drosophila heads were homogenized at 100 mg/ml in 10 mM NaCl, 25 mM Tris (pH 7.5), 2 mM MgCl2, 0.2 mM PMSF, 5 mM β-ME, 5 μg/ml aprotinin, and 5 μg/ml leupeptin containing either 2 mM EGTA or 0.2 mM CaCl2. Each homogenate was placed on ice for 10 min and then spun at 100,000 × g for 30 min at 4°C. The resulting supernatant and pellet fractions were analyzed on immunoblots probed with AP6094 antibodies.

Recombinant DrosNCa-Crude membranes were prepared from Canton-S Drosophila adult heads according to the method of Kobayashi et al. (1993). 200 ng of myr-rDrosNCa or non-rDrosNCa was reconstituted with 25 μl of membranes at 10 mg of protein/ml in membrane binding buffer (MBB; 25 mM Tris (pH 7.5), 0.1 M KCl, 2 mM MgCl2, 0.2 mM PMSF, 5 mM β-ME, 5 μg/ml aprotinin, 5 μg/ml leupeptin) containing either 1 mM EGTA or 0.2 mM CaCl2. The suspensions were incubated at 37°C for 30 min and spun at 100,000 × g for 30 min at 4°C. The supernatant and particulate fractions were analyzed as described above for native DrosNCa.

Purification of Recombinant Bovine Rhodopsin Kinase-Rhodopsin kinase (RK) was affinity-purified from Sf9 cell extracts as described in Chen et al. (1995) with modifications. The RK eluate was dialyzed against 20 mM Tris-HCl (pH 7.5), 5 mM MgCl2. The purified enzyme was examined by SDS-PAGE and estimated to be more than 90% pure. The concentration of the protein was determined by the Bio-Rad protein assay.

Rhodopsin Phosphorylation-Urea-washed rod outer segment membranes were prepared as described in Chen et al. (1995). The phosphorylation assay was performed according to the procedures of Chen et al. (1995) with some modifications. Purified recombinant RK and urea-washed membranes were mixed in 20 mM Tris-HCl (pH 7.5) under infrared illumination and kept on ice. Thirty seconds before a light flash that bleached 0.05% of rhodopsin, 5 μl of the RK-rod outer segment suspension was added to 5 μl of TCMN buffer containing [γ-32P]ATP (800 dpm/pmol) and rDrosNCa or rBovRv. The final concentrations of rhodopsin, RK, and ATP were 20 μM, 200 nM, and 500 μM, respectively. The reaction was incubated for 60 min in the dark at room temperature and stopped by adding an equal volume of SDS-PAGE sample buffer. Reactions unexposed to light were used as dark controls. Each reaction was fractionated on a 12% polyacrylamide gel and autoradiographed. Rhodopsin bands were excised and counted. To quantify light-dependent phosphorylation, dark controls were subtracted from the test samples.

Myristoylated and Nonacylated Recombinant Drosophila Neurocalcin-To investigate and compare the biochemical properties of the fatty acylated and unmodified forms of DrosNCa, milligram quantities of myr-rDrosNCa and non-rDrosNCa were expressed in E. coli and subsequently purified. The masses of myr-rDrosNCa and non-rDrosNCa were 21,975.51 and 21,764.17 Da, respectively, as determined by electrospray mass spectrometry, and are consistent with the calculated masses of the acylated and unmodified polypeptide products of the nca gene. The isolated proteins were examined by native and SDS-polyacrylamide gel electrophoresis.
1 The abbreviations used are: BovRv, bovine recoverin; β-ME, β-mercaptoethanol; GCAP, guanylate cyclase activating protein; non-rDrosNCa, nonacylated recombinant Drosophila neurocalcin; myr-rDrosNCa, myristoylated recombinant Drosophila neurocalcin; PMSF, phenylmethylsulfonyl fluoride; RK, rhodopsin kinase; PAGE, polyacrylamide gel electrophoresis; MOPS, 4-morpholinepropanesulfonic acid; IEF, isoelectric focusing; HEDTA, N-(2-hydroxyethyl)ethylenediamine-N,N′,N′-triacetic acid trisodium salt dihydrate.

Under native conditions (Fig. 1A), the Ca2+-free (EGTA) forms of the two proteins migrated faster than their Ca2+-bound forms, and myr-rDrosNCa (m) migrated further than non-rDrosNCa (n). As shown in Fig. 1B, under denaturing SDS-PAGE conditions with the addition of β-ME, both proteins resolved as single bands that migrated more slowly in the absence of Ca2+ than in its presence, and myr-rDrosNCa was slightly faster than non-rDrosNCa. When myr-rDrosNCa and non-rDrosNCa were subjected to SDS-PAGE without β-mercaptoethanol, the formation of dimers was detected (Fig. 1C). Under these nonreducing conditions, multiple forms of monomeric (arrows) and dimeric (arrowhead) DrosNCa were observed, and their mobilities differed from those of the β-ME-treated proteins (Fig. 1B). Considering that three cysteine residues are present in DrosNCa, these results suggest that rDrosNCa can form intra- and/or intermolecular disulfide bonds. Curiously, the distributions of the monomeric and dimeric species of myr- and non-rDrosNCa were different, suggesting that lipid modification could influence the formation of the disulfide linkages.

Tryptophan Fluorescence-DrosNCa, like bovine neurocalcin, has two tryptophan residues, located at positions 30 and 103 in EF1 and EF3, respectively (Teng et al., 1994); these two tryptophans are conserved throughout the recoverin family. The fluorescence change of myr-rDrosNCa upon binding Ca2+ is very similar to the change observed for myristoylated recombinant bovine neurocalcin δ (Ladant, 1995). Compared with the Ca2+-free form (HEDTA), the maximum fluorescence intensity of myr-rDrosNCa in 0.1 mM free Ca2+ was approximately 2-fold higher, but the emission maximum of 335 nm was not significantly different (Fig. 2A). This change in the amplitude of the signal was reversible and appeared to be Ca2+-specific, because MgCl2 did not induce any variation in the fluorescence spectrum of the protein (data not shown). The fluorescence of myr-rDrosNCa in its Ca2+-bound form was similar to that of non-rDrosNCa. However, unlike myr-rDrosNCa, non-rDrosNCa did not show a substantial increase in fluorescence upon Ca2+ binding (Fig. 2B), even when up to 1 mM free Ca2+ was added (data not shown). In contrast, nonacylated bovine neurocalcin δ has been reported to undergo a 1.5-fold increase in its fluorescence when it binds Ca2+ (Ladant, 1995). From these data, we infer that the Ca2+-induced variation in the fluorescence of myr-rDrosNCa is mainly due to a change in the position of its myristoyl group, which influences the environment of one or more of its tryptophan residues.

Native Drosophila Neurocalcin-We have previously shown that recombinant DrosNCa can be myristoylated by N-myristoyl transferase (Teng et al., 1994). Since lipid modification has not yet been demonstrated to occur on Drosophila proteins, we wondered if native DrosNCa was acylated.
To address this issue, we purified the protein from wild type Canton-S Drosophila adult head extracts by affinity chromatography using a column of affinity-purified AP6515 anti-rDrosNCa antibodies. Fig. 3A shows a sample of the fly head extract (lane 2) that was loaded onto the antibody column and the proteins recovered in the low pH elution (lane 3). A band was detected in the eluted fraction that migrated with an apparent mass of 22 kDa, the expected size for DrosNCa. To ascertain that the eluted 22-kDa band was native DrosNCa, we performed Western blot analyses using AP6094, a different affinity-purified anti-rDrosNCa antibody. Fig. 3B shows that the 22-kDa protein was recognized by AP6094 (lane 3) but not detected by the mouse anti-rabbit secondary antibodies alone (data not shown) and that it was depleted from the flow-through fraction after passage over the AP6515-Sepharose column (lane 2). Two-dimensional gel electrophoresis was performed to determine if native DrosNCa behaved like myr-rDrosNCa or non-rDrosNCa. As shown in Fig. 4A, myr-rDrosNCa resolved at a more acidic pH relative to non-rDrosNCa in the IEF dimension, and myr-rDrosNCa migrated slightly faster than non-rDrosNCa in the SDS-PAGE dimension, as previously observed in Fig. 1. We found that in IEF gels having different gradients of ampholytes, myr-rDrosNCa consistently resolved at a more acidic pH relative to the pI 5.2, 45-kDa standard (arrow), whereas non-rDrosNCa settled at a more basic pH. This result is consistent with the fact that a fatty acyl modification on DrosNCa results in the loss of one positive charge in the α-amine group of the N-terminal glycine residue. The apparent pI values of both rDrosNCa proteins are close to the calculated pI of 5.1 for the deduced nca polypeptide product.

FIG. 1. Analyses of myristoylated and nonacylated rDrosNCa by gel electrophoresis. A, native gel analysis. 2.5 μg of purified non-rDrosNCa (n) or myr-rDrosNCa (m), in the presence of 20 mM β-mercaptoethanol and either 3 mM EGTA or 1 mM CaCl2, was resolved on a 10% polyacrylamide gel. B, SDS-polyacrylamide gel electrophoresis in reducing conditions. 1.5 μg of non-rDrosNCa or myr-rDrosNCa, boiled in SDS sample buffer containing 200 mM β-ME and either 3 mM EGTA or 1 mM CaCl2, was resolved on a 15% SDS-polyacrylamide gel. Molecular weight (MW) standards are shown, and their corresponding sizes are indicated in kDa. C, SDS-PAGE in nonreducing conditions. 25 μg of recombinant protein was analyzed as described in B using SDS sample buffer without β-ME. All of the protein bands were visualized by staining with Coomassie. Using liquid chromatography-coupled electrospray mass spectrometry, the extent of myristoylation of the purified recombinant protein preparations was determined to be greater than 90%.

Previously, we had reported that AP6094 weakly cross-reacted with rat hippocalcin. This observation raised the possibility that these antibodies would detect homologues or different isoforms of DrosNCa in fly preparations. We therefore performed Western analysis on proteins extracted from Drosophila adult heads that had been separated on a two-dimensional gel. As shown in Fig. 4B, only one spot was recognized by AP6094 on the immunoblot. Under these two-dimensional gel conditions, rat hippocalcin was clearly separated from myr-rDrosNCa and non-rDrosNCa (data not shown). Thus, these data suggest that AP6094 specifically recognizes a single antigen in Drosophila adult heads, that being native DrosNCa.
To ascertain whether the gel mobility of purified native DrosNCa was similar to that of myr-rDrosNCa or non-rDrosNCa, mixtures of equivalent quantities of native and recombinant DrosNCa were separated on two-dimensional gels, transblotted, and probed with AP6094. Whereas non-rDrosNCa and the native protein resolved as two distinct spots (Fig. 4C), the immunoblot signals for myr-rDrosNCa and the native protein comigrated (Fig. 4D). From these results, we infer that DrosNCa is fatty acylated in vivo.

Ca2+-dependent Translocation of DrosNCa-Studies on recoverin, hippocalcin (Kobayashi et al., 1993), and bovine neurocalcin δ (Ladant, 1995) have shown that the fatty acylated forms of these proteins bind to membranes at high Ca2+ concentrations. To determine if native DrosNCa undergoes Ca2+-dependent translocation, Canton-S adult heads were homogenized in 10 mM NaCl, Tris buffer with either 1 mM CaCl2 or 2 mM EGTA and partitioned by centrifugation, and the resulting pellet and supernatant fractions were analyzed on immunoblots using AP6094 (Fig. 5A). In the presence of Ca2+, almost all of the DrosNCa was found in the pellet, whereas 70% of the protein was present in the soluble fraction in the presence of EGTA. When a similar experiment was done using 100 mM NaCl, Tris buffer with EGTA, only 40% of DrosNCa was found in the supernatant (data not shown). The data reveal that native DrosNCa binds to membranes and/or cytoskeleton when free Ca2+ levels are high and that its affinity for membranes increases with the ionic strength of its solvent environment. The translocation properties of DrosNCa are therefore similar to those of recoverin, hippocalcin, and bovine neurocalcin δ.

FIG. 3. Purification of native Drosophila neurocalcin. A, native DrosNCa was purified from a soluble extract of Canton-S adult heads by affinity chromatography using an AP6515 anti-DrosNCa antibody column. Fractions obtained from this purification scheme were subjected to SDS-PAGE on a 15% gel that was stained with Coomassie Blue. Lane 1, molecular mass standards (sizes indicated are in kDa); lane 2, 40 μg of protein from the Canton-S adult head extract; lane 3, approximately 1 μg of protein recovered in the low pH elution. B, Western blot. The following fractions were separated on a 15% SDS-polyacrylamide gel, transferred onto nitrocellulose, and probed with 0.2 nM AP6094. Lane 1, 40 μg of protein from the soluble Canton-S head extract; lane 2, 40 μg of protein from the flow-through fraction of the AP6515 column; lane 3, 4 μl of the low pH eluate. Based on densitometric scans and comparison to signals of rDrosNCa standards, it was estimated that there was originally 12.5 μg of DrosNCa in the 38 mg of protein in the Drosophila head extract, and approximately 1.7 μg of DrosNCa was recovered in the low pH elution. This corresponds to a yield of 14% from the antibody column.

FIG. 4. Two-dimensional gel electrophoresis. A, a mixture of 1.5 μg of non-rDrosNCa (n), 1.5 μg of myr-rDrosNCa (m), and 8 μg of two-dimensional molecular mass standards was first resolved in an IEF gel and then separated on a 15% SDS-polyacrylamide gel; the proteins were stained with Coomassie Blue. B, two-dimensional Western blot of 40 μg of total protein from a soluble fraction of Canton-S adult heads homogenized in 0.5% SDS and 5 mM EGTA. C, two-dimensional immunoblot of a mixture of 5 ng of non-rDrosNCa, 5 ng of purified native DrosNCa, and 8 μg of two-dimensional molecular mass standards.
D, a mixture of 5 ng of myr-rDrosNCa, 5 ng of purified native DrosNCa, and 8 μg of two-dimensional standards. The blots shown in B-D were initially stained with Ponceau-S to visualize the proteins, subsequently probed with AP6094 primary antibody and a goat anti-rabbit HRP-conjugated secondary antibody, and detected by using chemiluminescence. All of the two-dimensional gels were resolved in the presence of 2.5 mM EGTA.

The involvement of the fatty acyl moiety of DrosNCa in translocation was investigated by comparing the membrane binding abilities of myr-rDrosNCa and non-rDrosNCa. Briefly, recombinant protein was reconstituted with washed Drosophila head membranes and centrifuged. The resulting pellet and supernatant fractions were examined by Western analysis (Fig. 5B). In the presence of 0.2 mM CaCl2, most of the myr-rDrosNCa was located in the pellet, whereas it was almost completely soluble in the presence of EGTA. In contrast, non-rDrosNCa was always found in the supernatant, in the presence or absence of Ca2+. These results show that the association of recombinant DrosNCa with the particulate fraction is dependent on its fatty acyl modification.

Ca2+-dependent Inhibition of Bovine Rhodopsin Phosphorylation-It has been shown that recoverin inhibits the in vitro phosphorylation of rhodopsin by rhodopsin kinase (Chen et al., 1995; Klenchin et al., 1995) and that recoverin directly interacts with rhodopsin kinase in a Ca2+-dependent manner (Chen et al., 1995). Since the primary sequences of DrosNCa and BovRv are 54% identical and the two proteins have similar Ca2+-dependent membrane binding properties, we examined whether DrosNCa can affect rhodopsin phosphorylation in vitro. Urea-washed bovine rod outer segment membranes were reconstituted with purified recombinant bovine rhodopsin kinase in the presence or absence of myr- or non-rDrosNCa, and light-dependent rhodopsin phosphorylation was analyzed. When 5 μM of non- or myr-rDrosNCa was added to the assays, an inhibition of rhodopsin phosphorylation was observed in 0.1 mM Ca2+ but not in the presence of 1 mM EGTA (Fig. 6A); 65 and 85% inhibition was observed for the nonacylated and myristoylated proteins, respectively, at 0.1 mM Ca2+. To analyze the potency of the Ca2+-dependent inhibition of rhodopsin phosphorylation by DrosNCa, we compared the activity of non- and myr-rDrosNCa to those of non- and myr-rBovRv. Increasing concentrations of the recombinant myristoylated or nonacylated forms of DrosNCa or BovRv were added to reactions in the presence of 0.1 mM Ca2+, and the percentage of inhibition of rhodopsin phosphorylation was determined. Surprisingly, DrosNCa showed an inhibitory effect similar to that of BovRv. The plots for both myristoylated (closed squares) and nonacylated (open squares) DrosNCa were essentially superimposed on those of myristoylated (closed circles) and nonacylated (open circles) BovRv, respectively (Fig. 6B). The EC50 values were approximately 0.8 μM for the myristoylated proteins and 4 μM for the nonacylated proteins.
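As an aside on the quantification, EC50 values of this kind can be recovered from dose-inhibition data by fitting a simple Hill-type curve to the normalized, dark-subtracted counts. The sketch below is a generic illustration with made-up numbers, not the authors' analysis pipeline; the data points and starting parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, n):
    # Fractional inhibition of rhodopsin phosphorylation vs. inhibitor concentration
    return conc**n / (ec50**n + conc**n)

# Hypothetical dose-response data: concentrations in uM, fraction inhibited
# (each point would come from dark-subtracted counts normalized to the -inhibitor control)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
inhibition = np.array([0.10, 0.25, 0.55, 0.80, 0.92])

(ec50, n), _ = curve_fit(hill, conc, inhibition, p0=[1.0, 1.0])
print(f"EC50 = {ec50:.2f} uM, Hill coefficient = {n:.2f}")
```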
We wanted to determine if the inhibition of rhodopsin phosphorylation was an activity exhibited by other Ca2+-binding proteins related to recoverin and neurocalcin. We therefore examined the inhibitory activities of two other EF hand proteins: myristoylated recombinant GCAP-2, which has 29 and 38% identity with recoverin and neurocalcin, respectively; and calmodulin, which has 19 and 22% identity with recoverin and neurocalcin, respectively. In our reconstituted assays, both myr-rGCAP-2 and calmodulin displayed only weak inhibition of rhodopsin phosphorylation (Fig. 6B).

FIG. 5. Ca2+-dependent translocation of DrosNCa. A, subcellular fractionation of native DrosNCa. Wild type Drosophila Canton-S heads were homogenized in 25 mM Tris (pH 7.5), 10 mM NaCl, 2 mM MgCl2, 5 mM β-ME, 0.2 mM PMSF containing either 2.5 mM EGTA or 0.2 mM CaCl2. The homogenates were centrifuged, and the resulting pellet (P) and supernatant (S) fractions were analyzed on immunoblots using AP6094 anti-DrosNCa antibodies. Densitometric scans indicated that 70% of DrosNCa was present in the supernatant fraction of the EGTA sample. B, membrane association of myr- or non-rDrosNCa. Two hundred nanograms of recombinant protein was reconstituted with washed Canton-S head membranes in membrane binding buffer solution containing either 2 mM EGTA or 0.2 mM CaCl2 and centrifuged, and the pellet (P) and supernatant (S) fractions were examined by Western analyses using AP6094 antibodies. As a control, 200 ng of myr-rDrosNCa in solution containing 0.2 mM CaCl2 without Drosophila head membranes remained soluble under these conditions (myr-control). To ensure that the immunoblot signals observed in the samples were solely from recombinant DrosNCa, the absence of detectable native DrosNCa in the washed Canton-S head membranes (M) was ascertained.

FIG. 6. Ca2+-dependent inhibition of the phosphorylation of bovine rhodopsin. Phosphorylation assays were performed as described under "Experimental Procedures." The final concentrations of ATP, recombinant bovine RK, and urea-stripped bovine rod outer segment membranes in the reactions were 500 μM, 200 nM, and 20 μM, respectively. The reaction contents were separated on a 12% polyacrylamide gel and autoradiographed, and the rhodopsin bands were cut and counted. Light-dependent phosphorylation was determined by subtraction of the −light control from the +light sample. For normalization, the total number of phosphates incorporated into rhodopsin in the RK control (−rDrosNCa) was taken to be 100%, and rhodopsin phosphorylation in the presence of the test proteins was calculated relative to the control. A, autoradiograph showing the Ca2+-dependent inhibition of bovine rhodopsin phosphorylation by non- and myr-rDrosNCa. The phosphorylation reactions were performed in the presence of 1 mM EGTA (EGTA) or 0.1 mM CaCl2 (Ca2+) with or without 5 μM of non- or myr-rDrosNCa. The RK band corresponds to autophosphorylated RK. B, the effect of increasing concentrations of myristoylated (closed shapes) and nonacylated (open shapes) rDrosNCa (squares) or rBovRv (circles) on rhodopsin phosphorylation. Calmodulin (open diamonds) and myristoylated recombinant GCAP-2 (closed triangles) were used as controls. The final concentration of Ca2+ in these reactions was 0.1 mM. The data are the mean of at least three independent determinations, and error bars represent the standard deviation.

DISCUSSION

In this paper, we present two lines of evidence indicating that Drosophila neurocalcin is a Ca2+-binding protein that is fatty acylated in vivo. First, on two-dimensional gels, native DrosNCa comigrates with myristoylated rDrosNCa and is distinctly separated from nonacylated rDrosNCa. Second, the membrane binding properties of native DrosNCa are similar to those of myr-rDrosNCa; both proteins bind to membranes in high Ca2+ conditions and become more soluble when free Ca2+ levels decrease.
This study therefore provides the first evidence of the existence of protein acylation in Drosophila. Definitive elucidation of the type of lipid modification(s) on DrosNCa in vivo will have to be obtained by mass spectrometry analyses. The Ca2+-dependent membrane association ability of rDrosNCa is greatly enhanced by fatty acylation. It is likely that DrosNCa translocation occurs by the Ca2+-myristoyl switch mechanism that has been proposed for recoverin (Dizhoor et al., 1993; Hughes et al., 1995; Tanaka et al., 1995). In this model, the fatty acyl moiety of the Ca2+-free form of the protein resides in a hydrophobic pocket composed of aromatic and aliphatic amino acid residues distributed in the N-terminal half of the protein. The binding of Ca2+ to recoverin induces the extrusion of its lipid moiety and N-terminal α-helix into solution, thereby unmasking the hydrophobic pocket; this conformation favors interactions with membranes, cytoskeleton, and/or a target factor(s). The fluorescence of rDrosNCa in the presence and absence of Ca2+ is consistent with the Ca2+-myristoyl switch model. First, the variation in fluorescence of myr-rDrosNCa seems to be mainly due to a movement of its myristoyl group, since no major fluorescence change occurs upon Ca2+ binding to nonacylated rDrosNCa. Second, the fluorescence of myr-rDrosNCa in its Ca2+-bound form is very similar to that of the protein lacking the myristoyl modification. Taken together, these observations suggest that the observed fluorescence change of myr-rDrosNCa upon Ca2+ binding is due to the movement of its fatty acyl moiety from a protein environment into solution.

Function of Recoverin-like Ca2+ Sensors-Like calmodulin, recoverin and its homologues appear to have descended from a common ancestor that had four EF hand sites. All of the recoverin-like proteins identified to date have sequence determinants in their N termini necessary for fatty acylation, a modification that appears to be important for their Ca2+-dependent translocation properties. The discovery of multiple subclasses of neuronal recoverin-like proteins suggests that these cognates perform specialized cellular functions. Several of these cognates are coexpressed in a given neuron. These EF hand proteins have varying affinities for Ca2+ (Cox et al., 1994; Ladant, 1995; Chen et al., 1995; Klenchin et al., 1995; Palczewski et al., 1994; Dizhoor et al., 1995), suggesting that their regulatory activities would be performed over different ranges of Ca2+ concentration. It has been proposed that recoverin and S-modulin delay photorecovery by inhibiting the receptor-quenching activity of rhodopsin kinase at high Ca2+ levels (Chen et al., 1995; Klenchin et al., 1995). In contrast, GCAP-1 and GCAP-2 appear to promote photorecovery by stimulating membrane guanylate cyclase at low Ca2+ concentrations (Palczewski et al., 1994; Dizhoor et al., 1995). Drosophila frequenin has been reported to stimulate bovine photoreceptor membrane guanylate cyclase in low Ca2+ conditions (Pongs et al., 1994); however, DrosNCa does not exhibit this activity in similar in vitro assays.2 In this paper, we report that DrosNCa is capable of inhibiting the phosphorylation of bleached rhodopsin in reconstituted assays in a Ca2+-dependent manner. We show that its potency is similar to that of recoverin and that its inhibitory activity, like that of recoverin, is enhanced by myristoylation. Our results on BovRv are in agreement with the findings of Chen et al.
(1995) but contrast with those of Calvert et al. (1995), who report that myristoylated and nonacylated BovRv have similar inhibitory potencies. It seems plausible that differences between our experimental procedures and those of Calvert et al. may account for the discrepancies in results. Three experimental differences stand out in particular: (i) in Calvert et al., 100% of the rhodopsin was bleached during the experiment, whereas we bleached only 0.05% of the rhodopsin; (ii) we used recombinant rhodopsin kinase produced using the baculovirus-Sf9 expression system, whereas Calvert et al. extracted the enzyme from bovine photoreceptors; and (iii) the methods of purifying rhodopsin kinase differed, e.g. the rhodopsin kinase preparations of Calvert et al. included transducin, arrestin, and 0.4% Tween 20, whereas our protein preparations were highly purified and did not contain detergent. We have found that detergent can interfere with the Ca2+-dependent association of rhodopsin kinase and recoverin.3 Although our experiments do not explore the mechanism by which DrosNCa inhibits rhodopsin phosphorylation by rhodopsin kinase, the similarity between the inhibitory effects of DrosNCa and recoverin suggests that these two proteins act through the same mechanism(s). It has been shown that recoverin and rhodopsin kinase associate in a Ca2+-dependent manner and that this interaction is required for the inhibitory effect of recoverin on rhodopsin phosphorylation (Chen et al., 1995). One might therefore postulate that a direct interaction between DrosNCa and rhodopsin kinase is necessary for the inhibition of rhodopsin phosphorylation by DrosNCa. Further investigations are required to test this hypothesis. Ca2+-dependent inhibition of rhodopsin phosphorylation has also been observed using another recoverin cognate, hippocalcin,3 but other recoverin-related proteins like GCAP-2 or the more distant calmodulin are only poor inhibitors in these phosphorylation assays. As such, the capacity for in vitro inhibition of rhodopsin phosphorylation appears to be a newly discovered property of a subset of recoverin-like proteins including recoverin, neurocalcin, and hippocalcin. This is further supported by recent evidence that neuronal recoverin-like proteins of the neuronal calcium sensor family inhibit in vitro phosphorylation of rhodopsin (De Castro et al., 1995). Rhodopsin kinase belongs to a subfamily of serine/threonine kinases that desensitize activated seven-transmembrane receptors involved in signal transduction. Cognates of rhodopsin kinase include mammalian βARK1, βARK2, and G protein-coupled receptor kinases 4-6, as well as Drosophila G protein-coupled receptor kinases 1-4 (Cassill et al., 1991; Inglese et al., 1993). Given that recoverin-like proteins colocalize with homologues of rhodopsin kinase in neuronal tissues and that several cognates of recoverin inhibit rhodopsin phosphorylation in a Ca2+-sensitive manner, it is tempting to speculate that members of these two protein subfamilies interact to confer Ca2+-dependent attenuation on seven-transmembrane receptor quenching events. However, at least two issues raise questions about the ability of recoverin-like proteins to control receptor activity in vivo. First, recoverin and its cognates appear to be exclusively expressed in neuronal tissues, whereas GRKs and seven-transmembrane receptors are present in non-neuronal cells as well.
Second, the documented data on the inhibitory effects of recoverin and its cognates are based on in vitro studies using mammalian rhodopsin kinase. If the recoverin and rhodopsin kinase subfamilies interact as proposed, one should be able to demonstrate that recoverin-like proteins also inhibit βARK1, βARK2, and/or G protein-coupled receptor kinases 4-6. So far, Chen et al. (1995) have found that recoverin cannot inhibit βARK1 in reconstituted assays. Neuronal signaling is attenuated by mechanisms involving the universal second messenger Ca2+. Recoverin and its homologues probably function as Ca2+-sensing regulators in neuronal processes, but their precise mode(s) of action remains unresolved. It is plausible that each recoverin-like protein differentially controls a receptor kinase. Alternatively, it is conceivable that these EF hand proteins, like calmodulin, may interact with multiple targets. Indeed, multiple proteins have been reported to associate with recoverin and neurocalcin in a Ca2+-dependent manner (Chen et al., 1995; Okazaki et al., 1995). Further studies are therefore required to elucidate the physiological roles of this diverse class of Ca2+-binding proteins.
Joint method using Akamatsu and discrete wavelet transform for image restoration

Current technology makes it easy for humans to take an image and convert it to digital content, but sometimes additional noise appears in the image so that it looks damaged. The damage that often occurs, such as blurring and excessive noise in digital images, can certainly affect the meaning and quality of the image. Image restoration is a process used to restore an image to its original state before the damage occurred. In this research, we propose an image restoration method that combines the Wavelet transformation and the Akamatsu transformation. Based on previous research, Akamatsu's transformation only works well on blurred images. In order not to focus solely on blurry images, Akamatsu's transformation will be applied to the high-low (HL), low-high (LH), and high-high (HH) sub-bands produced by the Wavelet transformation. The results of the proposed method will be compared with those of previous methods, with PSNR used as the measure of restoration quality. Based on the results, the proposed method improves restoration quality on noisy images, such as those with Gaussian or salt-and-pepper noise, and also works well on blurred images. The average increase is around 2 dB based on the PSNR calculation.

Introduction

An image is a discrete representation of data that has layout information and the color intensity of an object [1]. Nowadays, an image can easily be converted into digital content, but sometimes additional noise is introduced during camera capture so that the image appears damaged. The operation of repairing this damage is called image restoration. Damage during capture can be caused by the quality of the digitizer, poor camera focus, and signal noise from the sensors, producing noise or a blurry image [2]. Image restoration is the process of obtaining a clean original image from a damaged or noise-corrupted image [3,4]. Karungaru et al. [5] used the Akamatsu transformation to restore images. The Akamatsu transform is a transformation technique developed by Norio Akamatsu. It produces a differential value representing the characteristics of the image, and by scaling between the integral and differential values it can perform image restoration. From these integral and differential values, a scale adjustment between them yields the restored image. Restoration with this method is very good only on blurred images, because it increases the energy in the image, but it produces side effects in that other noise appears sharper. Megha et al. [6] proposed Adaptive Histogram Equalization and DWT techniques to restore images. The first step of their image restoration is decomposition using the Discrete Wavelet Transform to get four sub-bands: low-low (LL), low-high (LH), high-low (HL), and high-high (HH). The LL sub-band is selected and copied to LL2, to which Adaptive Histogram Equalization is applied. Sub-bands LL and LL2 are subsequently decomposed using SVD to produce U1, S1, V1 and U2, S2, V2. A calculation over both SVD decompositions then determines the improvement factor. The results of this restoration method are good at removing noise but reduce the energy in the image. SVD is a transformation that has been widely used in digital image processing [10-12]. SVD is a technique for obtaining the geometric features of an image. Basically, SVD is used to handle matrices that do not have an inverse.
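To make the last point concrete, here is a minimal numpy sketch, not taken from the paper, showing how SVD yields a Moore-Penrose pseudo-inverse for a singular matrix that has no ordinary inverse; the example matrix is arbitrary.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # rank-1, so A has no ordinary inverse

U, s, Vt = np.linalg.svd(A)           # A = U @ diag(s) @ Vt
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)   # invert only significant singular values
A_pinv = Vt.T @ np.diag(s_inv) @ U.T      # Moore-Penrose pseudo-inverse

assert np.allclose(A_pinv, np.linalg.pinv(A))
```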
The SVD has several useful characteristics: a change in the singular values does not significantly alter the overall image, and the singular values contain illumination information, while the singular vectors retain the structural information of the image.

The Wavelet transform is a technique for both stationary and non-stationary signals. Because of its efficiency in analyzing local discontinuities of a signal, the Wavelet transform has been widely used in various disciplines [13-16]. The DWT is one form of the Wavelet transform in which the image signal is passed through a bank of analysis filters followed by a downsampling operation. The DWT decomposes an image into four sub-bands: low-low (LL), low-high (LH), high-low (HL), and high-high (HH). LL is the low-frequency signal, which is the approximation of the original image. LH and HL are intermediate-frequency signals, and HH is a high-frequency signal. LH, HL, and HH represent changes of information in the image [13,17]. The DWT is therefore very good at representing a signal with singularities. However, the DWT is not suitable for representing fine image signals and colored noise, so it needs to be combined with other methods to improve image restoration results [18].

The Akamatsu transform has proved excellent for restoring blurred images because it produces a differential value that is characteristic of the image; this value is strengthened and can reduce the blur in the image. Meanwhile, the use of the low sub-band in wavelet transforms can reduce noise, but it makes significant energy changes in the image. Based on this background, this research proposes to combine the DWT and the Akamatsu transform to obtain more optimal image restoration results. The LH, HL, and HH sub-bands are chosen for applying the Akamatsu transform. To test the quality of the restoration method, various image degradations are applied, such as Gaussian noise, salt and pepper, and blur. The restoration results are evaluated by PSNR and compared with the methods in the research of Karungaru et al. [5] and Megha et al. [6].

Akamatsu and wavelet transform

2.1 Akamatsu transform

The Akamatsu transform is a transformation that produces the integral and differential values of an input signal [5,19]. It requires a target signal, defined as P(x, y), within the transformation. With the presence of the target signal, the result of the transformation is divided into two parts, to the right and to the left of the target signal. The signals to the right of P(x, y) are defined as VRS(x) and those to the left as VLS(x). The target signal is used to determine the result of the Akamatsu transform; Figure 1 illustrates the target signal in the Akamatsu transform.

To reach the target signal, the integral value and the differential value must be calculated. The integral value can be obtained using Eq. (1), and the first-order integral value of the Akamatsu transform is defined by Eq. (4). Thus, the member values of VRS(x) and VLS(x) can be written as Eqs. (5) and (6). The sum of VRS(x), defined as S_R, is computed from the values P(x + k) by Eq. (7), and the sum of VLS(x), defined as S_L, is computed by Eq. (8). Having obtained the integral value, the differential value can be derived. The differential value (D) is the difference between the original pixel value and the integral value, as given in Eq. (9):

[D(x)]_1st = P(x) - [A(x)]_1st    (9)

In previous research, the Akamatsu transform was calculated in the x-direction only.
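Because Eqs. (1)-(8) are only partially legible here, the following is a hedged 1-D sketch rather than the paper's exact procedure: it assumes the first-order integral value A(x) is a local neighborhood mean, takes the differential value D(x) = P(x) - A(x) as in Eq. (9), and applies the up_enh/down_enh scaling described in the next section.

```python
import numpy as np

def akamatsu_1d(signal, window=3, up_enh=1.2, down_enh=0.9):
    """Sketch of a first-order Akamatsu-style enhancement of a 1-D signal.

    Assumptions (the paper's Eqs. (1)-(8) are only partially legible):
    the integral value A(x) is taken as a local mean over `window`
    neighbors, the differential value is D(x) = P(x) - A(x) as in
    Eq. (9), and the restored signal rescales the two parts with
    down_enh and up_enh.
    """
    p = signal.astype(float)
    kernel = np.ones(window) / window
    a = np.convolve(p, kernel, mode="same")   # integral (smoothed) value
    d = p - a                                 # differential value, Eq. (9)
    out = down_enh * a + up_enh * d           # scale integral down, detail up
    return np.clip(out, 0, 255)               # keep pixels in the 8-bit range

row = np.array([10, 12, 11, 200, 13, 12, 10])  # toy row with one sharp edge
print(akamatsu_1d(row))
```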
In this research, the Akamatsu transform is also applied in the y-direction. Applied in the y-direction, the Akamatsu transform likewise divides the signal into two parts relative to the target signal P(x, y). The section located above P(x, y) is called VTS(y), which can be calculated by Eq. (10), and the part under P(x, y) is called VBS(y), which can be calculated by Eq. (11). Once the first order of the Akamatsu transform is known, the calculation can be continued to higher orders with the equations above, up to order n, which yields Eq. (15) for the full Akamatsu transform. Here up_enh is an enhancement factor that increases the differential waveform to obtain a clearer image, while down_enh is a factor that decreases the integral waveform so that the new pixel value does not exceed the limit of 255.

2.2 Discrete wavelet transform

Wavelet transforms have been widely used in digital image processing and pattern recognition [20], and this technology plays an important role in digital image processing. Decomposition using the wavelet transform produces two-dimensional functions of time and space. Wavelet decomposition is also helpful in recognizing singularity details by analyzing the time-frequency content of non-stationary signals [21-23]. Many forms of wavelet transform have been developed, each with different characteristics. The discrete wavelet transform is one such form, alongside others such as the complex wavelet transform (CWT) and the Slantlet transform (SLT) [23,24]. The DWT has many good features, such as multi-resolution capability and a space-frequency localization property, which allow it to be applied to an entire image and reconstructed at several image sizes [25].

In the DWT, images are divided into four sub-bands: low-low (LL), low-high (LH), high-low (HL), and high-high (HH). The sub-band division is carried out with two kinds of filters, a low-pass and a high-pass filter. Various filters can be used, of which Daubechies and Haar are the most popular [26-28]. Haar filters have the advantage of a simple structure, which reduces memory usage, and are preferred for analyzing compact discrete signals [27]. Each sub-band holds important information about the image. LL is representative of the image and is the sub-band most similar to the original image before decomposition. LH, HL, and HH are the wavelets of vertical, horizontal, and diagonal variations [6,13,21,25]. Decomposition in the DWT can be done using Eqs. (18) and (19) [29], with two-dimensional wavelet basis functions of the form

Ψ^i_{j,m,n}(x, y) = 2^{j/2} Ψ^i(2^j x - m, 2^j y - n)    (18)

where i ∈ {HL, LH, HH} denotes the wavelet and m, n range over the size of the image.

Figure 2 illustrates the sub-band decomposition process in the DWT using the Haar filter, where 2↓1 denotes downsampling of columns and 1↓2 denotes downsampling of rows. Figure 2 shows the decomposition process at the first level. In practice, decomposition can also be done at several levels to get the most appropriate results, but in this research decomposition is done at one level only; each sub-band is then processed further in a different way, according to its characteristics. Reconstruction is done by the inverse DWT using Eq. (20).

Proposed restoration scheme

The proposed restoration scheme in this study combines the DWT and the Akamatsu transform on the input image. The input image is an image that has been degraded by manipulations such as Gaussian noise, salt and pepper, and blurring.
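Before detailing the stages, a concrete illustration of the one-level Haar decomposition and the inverse DWT of Eq. (20) may help. The sketch below uses the PyWavelets library rather than the paper's Matlab; mapping PyWavelets' horizontal/vertical detail bands onto the LH/HL names used here is an assumption, since conventions differ between texts.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(512, 512)).astype(float)

# One-level Haar decomposition: LL is the approximation; the detail
# bands carry the vertical/horizontal/diagonal variation information.
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")

# Each sub-band is half the size of the input at level 1.
print(LL.shape)  # (256, 256)

# Inverse DWT (Eq. (20)) reconstructs the image from the four sub-bands.
restored = pywt.idwt2((LL, (LH, HL, HH)), "haar")
assert np.allclose(restored, image)
```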
The stages of the proposed scheme can be seen in Figure 3 and are detailed below:

1. Decompose the input image using the DWT to get the four sub-bands LL, HL, LH, and HH.
2. Select the HL, LH, and HH sub-bands for the Akamatsu transform.
3. Apply the y-direction Akamatsu transform on the LH sub-band to get a new LH sub-band.
4. Apply the x-direction Akamatsu transform on the HL sub-band to get a new HL sub-band.
5. Apply two Akamatsu transforms on the HH sub-band: first the x-direction transform and then the y-direction transform. To obtain the new HH sub-band, take the average of the two transforms using Eq. (21).
6. Perform the inverse DWT to get the restored image.

Implementation and results

In this study, the proposed method is simulated using five grayscale images of size 512x512. These are standard images that can be downloaded from the internet. After downloading, the images are used directly without preprocessing. The whole experimental process uses Matlab R2015a. The images used are shown in Figure 4. To test the proposed method, three kinds of manipulation are applied, namely blur, Gaussian noise, and salt and pepper, using Matlab functions. Figure 5 shows a sample that has been manipulated.

Peak Signal-to-Noise Ratio (PSNR) is used to determine the degree of damage to the noised image. The PSNR value is obtained by comparing the original image and the noised image, and Eq. (22) is used to calculate it, where h, g are the dimensions of the image, O_i is the original image, and N_i is the noised image. Table 1 shows the PSNR values of the original images after the three noise models are applied.

Next, the noised images are restored with the proposed method, which combines two techniques: the wavelet transform with the Haar filter, followed by the Akamatsu transform. The wavelet transform produces four sub-bands, three of which are further processed using the Akamatsu transform. The processing of the Akamatsu transform in each DWT sub-band is adapted to the characteristics of that sub-band to optimize the restoration results: the y-direction Akamatsu transform is applied to the LH sub-band, the x-direction to the HL sub-band, and the xy-direction to the HH sub-band. This is done because the LH sub-band carries middle frequencies horizontally, HL carries middle frequencies vertically, and HH carries high frequencies diagonally [6,13,21,25]. Table 2 shows the results and PSNR values of the images reconstructed by the proposed restoration method.

Comparative analysis

To evaluate the performance of the proposed method, its restoration results were compared with the restoration results of the methods of Karungaru et al. [5] and Megha et al. [6]. Both methods were replicated in this research so that the comparison could be made on the same image dataset, since the originally reported datasets differ. Comparative measurements were made with PSNR, as shown in Table 3. As can be seen in Table 3, the PSNR values of the proposed restoration method appear better than those of the two previous methods. Visually, the differences in the restored images can be seen in Figures 6-8.
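For reference, the whole pipeline and the PSNR metric used in these comparisons can be summarized as follows. This is a sketch, not the paper's Matlab code: the Akamatsu step is abstracted behind a placeholder enhance() function, which is an assumption standing in for Eqs. (15) and (21).

```python
import numpy as np
import pywt

def psnr(original, distorted, peak=255.0):
    """Standard PSNR in dB (the paper's Eq. (22) up to notation)."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def restore(image, enhance):
    """Steps 1-6 of Figure 3, with the Akamatsu transform abstracted.

    `enhance(band, direction)` is a placeholder for the Akamatsu transform
    of Section 2.1 applied in the given direction; its exact form is an
    assumption here.
    """
    LL, (LH, HL, HH) = pywt.dwt2(image, "haar")        # step 1: decompose
    LH2 = enhance(LH, "y")                             # step 3: y-direction
    HL2 = enhance(HL, "x")                             # step 4: x-direction
    HH2 = 0.5 * (enhance(HH, "x") + enhance(HH, "y"))  # step 5: Eq. (21)
    return pywt.idwt2((LL, (LH2, HL2, HH2)), "haar")   # step 6: inverse DWT

# Toy check: PSNR of a noised image against its clean original.
rng = np.random.default_rng(4)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 255.0
noisy = np.clip(clean + rng.normal(0.0, 10.0, clean.shape), 0, 255)
print(psnr(clean, noisy))
# With the identity as enhancement, restore() just reproduces its input.
assert np.allclose(restore(noisy, lambda band, d: band), noisy)
```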
The PSNR values in Table 3 show that the image restoration results of the proposed method are superior to the previous methods. Visually, Figures 6-8 also show that the proposed method appears superior. Only for blur do the restoration results of the method in [5] appear clearer and sharper than those of the proposed method. The restoration results of the method in [6], however, create new noise in the form of lines at the edge of the image, as shown in Figure 6(a).

Conclusions

Based on the results of the analysis and comparison, it can be concluded that the proposed method restores better than the methods in [5] and [6] for all types of manipulation. The restoration of Gaussian noise and salt-and-pepper manipulations also reduces and smooths the generated noise, whereas restoration using the method in [5] even causes a blur effect. This happens because the Akamatsu transform has characteristics that change the image information by modifying the differential value with the down_enh value, and the proposed method applies these changes only in the HL, LH, and HH sub-bands. Thus, after the inverse DWT, the original image characteristics do not change significantly, yet the noise in the image is removed. The method in [6] has a different character: although noise is reduced, the energy in the restored image is also reduced, so the image appears darker and the resulting PSNR value is less good. It can therefore be concluded that the proposed method works better on various manipulations and can maintain the approximation value of the image. In future studies, several wavelet filters can be explored.

Table 3. Comparison of PSNR values of the proposed method, the method in [5], and the method in [6].
Analysis of Granules Behavior in Continuous Drum Mixer by DEM

A numerical simulation model was developed to analyze the behavior of iron ore granules in a continuous drum mixer by the Discrete Element Method (DEM). The effects of the gradient angle of the drum mixer on the behavior of granules were investigated using the simulation. A granulation experiment with iron ore fines was also performed to observe the occupation ratio and retention time. The occupation ratio and retention time obtained by both the simulation and the experiment decrease with an increase in the gradient angle. The effect of the length of the drum mixer on the behavior of granules was investigated using the DEM simulation. The retention time increases with an increase in the length of the drum mixer. If the occupation distributions in the direction of the rotational axis are plotted relative to the exit of the drum mixer, they are the same regardless of the length of the drum mixer. The proposed simulation model will be useful for understanding granule behavior and for the design of granulators.

Introduction

The granulation of iron ore is an important process in iron ore sintering, which supplies the main iron burden to the blast furnace for ironmaking. It strongly affects the productivity and the properties of the produced sinter, such as strength and reducibility. Since the recent rapid increase in world steel production has accelerated the deterioration of the composition and properties of iron ore, the importance of the granulation process has grown significantly. In general, granulation is performed using a drum mixer or pan pelletizer while adding sprayed water to the mixture of fine iron ore and fluxing materials such as limestone and burnt lime.

In order to properly control the granulation process, it is necessary to understand the motion of the materials in the mixer/granulator. Several studies have been performed so far to predict the granule growth rate theoretically 1-4) and experimentally. 5-8) In addition, the effects of rotational speed and particle charge ratio on the movement of particles have been investigated theoretically. 9,10) However, the granulation phenomenon is not yet fully understood, because it is very complicated and involves many parameters, such as the rotational speed of the drum mixer, the diameter and length of the drum mixer, the gradient angle, and so on. Since computer simulation has the potential to provide significant detailed information on the granulation process, we have previously developed a simulation model of granule behavior in a batch-type drum mixer 11) using the Discrete Element Method (DEM). 12-14) This paper describes a numerical simulation method for the motion of granules in a continuous drum mixer using DEM; its validity is discussed in terms of the occupation ratio and retention time. The effects of drum length and gradient angle on the behavior of granules are also examined.

Discrete Element Method (DEM)

The granule behavior in the continuous drum mixer was simulated by the Discrete Element Method (DEM), which is one of the most reliable and popular computer simulation methods for particle behavior. 15-23) The interaction forces in a collision between two particles are represented by the Voigt model shown in Fig. 1, which is composed of a spring-dashpot together with a slider for the friction in the tangential motion.
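As a minimal illustration of this contact model, the sketch below computes the pairwise normal and tangential forces from a spring-dashpot with a Coulomb friction slider. The stiffness and damping values in the example call are illustrative assumptions, not the constants of Table 1; only the friction coefficient of 0.7 determined later is echoed.

```python
def voigt_contact_force(delta, ddelta, xi, dxi, kn, cn, kt, ct, mu):
    """Pairwise contact force in a Voigt model like that of Fig. 1.

    Normal direction: spring-dashpot on the overlap delta.
    Tangential direction: spring-dashpot on the tangential displacement xi,
    limited by a Coulomb friction slider mu*|Fn|. Parameter values used
    below are illustrative, not the paper's Table 1 constants.
    """
    fn = kn * delta + cn * ddelta   # normal spring + dashpot
    ft = kt * xi + ct * dxi         # tangential spring + dashpot
    ft_max = mu * abs(fn)           # friction slider (Coulomb limit)
    if abs(ft) > ft_max:            # slider yields: the contact slides
        ft = ft_max if ft > 0 else -ft_max
    return fn, ft

# Example call with illustrative stiffness/damping; mu = 0.7 as determined
# from the rising-angle calibration described later.
print(voigt_contact_force(delta=1e-4, ddelta=0.01, xi=5e-5, dxi=0.0,
                          kn=1.0e5, cn=0.5, kt=5.0e4, ct=0.2, mu=0.7))
```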
Modeling of Granules

Prior to applying the DEM to the granulation process, the model for granulation should be discussed. There are two possible models for the simulation of the granulation process: one traces the movement of all the constituent particles, while the other traces the movement of each granulated particle as a single particle. The former might be better for analyzing the granulation mechanism. However, it is not realistic to simulate, because the number of particles to be traced is enormous and it would take a very long time to simulate the behavior of the granules. Therefore, the latter is adopted in this work. The assumptions in this simulation model are as follows:

1) Granules are treated as single particles.
2) The particle shape is spherical.
3) The particle diameter is uniform.
4) The particle diameter and other parameters are fixed during the simulation.
5) The effect of moisture is ignored.

Physical constants and simulation conditions are shown in Table 1. Figure 2 shows the schematic diagram of the continuous drum mixer used in the simulation, and Table 2 lists the size of the drum mixer. The drum length and gradient angle were changed to investigate their effects on the granule behavior. The granules are fed at the extreme right of the drum mixer and drop out at the extreme left (Fig. 2). The feeding rate is fixed at 100 kg/min.

Determination of Parameters

The simulation parameters need to be determined so that the simulated granule behavior corresponds to the experimental results. In particular, the frictional coefficient has to be determined carefully, since it strongly affects the granule behavior. The simulation of granule behavior was performed to investigate the effect of the frictional coefficient on the behavior; snapshots are shown in Fig. 3. The granule behavior is influenced by the frictional coefficient, and the rising angle increases with an increase in the frictional coefficient. The rising angle was defined as shown in Fig. 4(a) to evaluate the granule behavior and to compare it with the experimental results. Figure 4(b) shows the relationship between the friction coefficient used in the simulation and the rising angle. The rising angle obtained from the experiment is also shown in Fig. 3. The rising angle increases with an increase in the frictional coefficient until the coefficient reaches 0.5; beyond that, the rising angle approaches a constant value of about 100 degrees.
The frictional coefficient was determined to be 0.7, at which the rising angle obtained from the simulation is closest to the experimental one, although the simulated rising angle does not agree with the experimental results completely. This difference could be due to the assumption of spherical granules, and its cause will be investigated in more detail in the future.

Experiment

A granulation experiment was performed using the continuous drum mixer. The granulator consists of a rotating drum mixer, a water nozzle, and a feed hopper (Fig. 5). The raw materials used in the actual process are supplied continuously at the extreme right of the drum mixer, water is sprinkled by a nozzle, and granules are drained continually from the other end. The occupation ratio of iron ore in the drum mixer was measured, and the retention time was measured by mixing in ZnO powder.

Figure 6 shows snapshots of the granule motion in the drum mixer (Q = 100 kg/min, α = 1.8 deg., φ60 × 300 cm). The empty drum mixer was prepared as shown in Fig. 6(a), and the granules were then supplied continuously at the extreme right (Fig. 6(b)). They move to the left (Figs. 6(c)-6(e)), and a small number of granules drop from the left edge (Figs. 6(d), 6(e)). The number of granules in the mixer increases little by little, and the discharging rate of granules approaches the feeding rate. Finally, the number of granules in the drum mixer becomes constant (Fig. 6(f)).

Figure 7 shows the number of granules in the drum mixer over time. The number of granules rises rapidly in the initial stage and then saturates in all cases; this state is called the steady state. The number of granules in the drum mixer decreases as the gradient angle of the drum mixer increases, and the time to reach the steady state becomes longer with a decrease in the gradient angle. The granule behavior in the period indicated by the heavy line was analyzed.

Effect of Gradient Angle

Figure 8 shows the occupation ratio distributions in the direction of the rotational axis; both the simulation and the experimental results are shown (Fig. 8: distributions of occupation ratio in the direction of the rotating axis; Q = 100 kg/min, n = 20 rpm, 300 cm). Regarding the experiment, in the cases of higher gradient angle the occupation ratio is constant in the range from 50-300 cm and then decreases gradually toward the exit, while in the cases of lower gradient angle the occupation ratio decreases gradually from the feeding point. The same tendency can be seen in the simulation, although small quantitative differences between the simulation and experimental results can be seen. The difference might be due to the assumption that the granule size does not change: actually, the granule size is smaller at the feeding point and the granules grow little by little toward the exit of the drum mixer. In general, the flowability of smaller granules/particles is lower and that of larger granules is higher. Therefore, the occupation ratio in the experiment is larger than that in the simulation in the range of 50-300 cm, and smaller than that in the simulation in the range of 0-50 cm.

Figure 9(a) shows the relationship between the (average) retention time and the gradient angle: a lower gradient angle gives a longer retention time, consistent with the decrease of retention time with increasing gradient angle noted above. Figure 9(b) shows the retention time distribution with the gradient angle of the drum mixer as a parameter.
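A sketch of the kind of post-processing behind a retention time distribution such as Fig. 9(b): compute per-granule retention times from feed and discharge timestamps and bin them. The timestamps below are synthetic placeholders, not simulation output.

```python
import numpy as np

def retention_time_distribution(t_in, t_out, n_bins=20):
    """Per-granule retention times from feed/discharge timestamps,
    binned into a distribution, plus the average retention time."""
    retention = t_out - t_in
    counts, edges = np.histogram(retention, bins=n_bins)
    return retention.mean(), counts, edges

# Synthetic timestamps (s): granules fed over 10 min, held ~2-4 min each.
rng = np.random.default_rng(3)
t_in = rng.uniform(0.0, 600.0, size=2000)
t_out = t_in + rng.uniform(120.0, 240.0, size=2000)
mean_rt, counts, edges = retention_time_distribution(t_in, t_out)
print(f"average retention time: {mean_rt:.1f} s")
```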
The retention time distribution is narrow when the gradient angle is high. As the gradient angle becomes lower, the retention time distribution becomes wider and shifts toward longer times. The retention time obtained by the experiment is also shown in Fig. 9(b); the retention time obtained by the simulation almost agrees with the experimental results.

Figure 10 shows the number of granules in the drum mixer over time with the drum mixer length as a parameter. The number of granules increases quickly in the initial stage and then approaches a constant value; it takes a longer time to reach the steady state when a longer drum mixer is used. The granule behavior in the period indicated by the heavy line was analyzed. Figure 11 shows snapshots of the front view and side view of the granule motion in the steady state in the drum mixer. No significant difference can be seen in the front view even when the drum mixer length becomes long, and the granule bed heights and shapes are almost the same in all cases.

Figure 12 shows the occupation ratio in the direction of the rotational axis as a function of the distance from the exit, with the length of the drum mixer as a parameter. The occupation ratio becomes smaller from the feeding point toward the exit and decreases rapidly near the exit. No difference in the occupation ratio distributions can be seen even when the drum mixer becomes longer. These results could be useful for the optimum design of the drum mixer.

Figure 13(a) shows the relationship between drum length and retention time. The retention time appears to be proportional to the length of the drum mixer, and the simulation results are not far from the experimental results. Figure 13(b) shows the retention time distribution with the drum length as a parameter. As the drum mixer length becomes shorter, the distribution of the retention time becomes narrower and the retention time shifts toward shorter times.

Conclusion

In order to analyze the motion of the raw materials of iron ore during granulation with a continuous drum mixer, a numerical simulation model was developed using the Discrete Element Method (DEM). The validity of the model was verified by comparison with the experimental results in terms of the occupation ratio and retention time of the granules. The simulation model will be useful for examining the granulation behavior of the raw materials, optimizing the granulation parameters, and designing efficient granulating equipment such as drum mixers and pan pelletizers.
Choose wisely: the quality of massage education in the United States.

BACKGROUND: Assessing the quality of postsecondary education remains a difficult task, despite many efforts to do so. No consensus or standard definition of educational quality has yet been agreed upon or developed.

PURPOSE: This study evaluated the quality of massage education in the United States using three closely related questions to frame the evaluation: 1) Is accreditation improving the quality of education for massage therapy? If not, then what do we need to do to improve it? 2) Does accreditation by COMTA specifically improve quality of education compared to other vocational accrediting agencies that do not require curriculum competencies specific to massage? 3) Would adding competencies at an "advanced" level, or specific degree levels, be helpful in advancing massage therapy in the eyes of other health professions?

SETTING: United States.

PARTICIPANTS: Members of a national massage education organization, members affiliated with the educational arm of two national professional associations, and members of two national education organizations in complementary and integrative health care (CIHC).

RESEARCH DESIGN: Mixed methods evaluation using three data sources: existing gainful employment data from the US Department of Education, analyzed by type of massage program and accreditation agency to determine average and relative value for cost; numbers of disciplinary actions against massage practitioners reported by state regulatory agencies; and a qualitatively developed survey administered to two different groups of educators.

RESULTS: Average tuition cost across all reporting schools/programs was $13,605, with an average graduation rate of 71.9%. Of the schools and programs that reported student loan data, 84% of students received federal financial aid. The median loan amount was $8,052, with an average repayment rate of 43.4%. Programs in corporate-owned schools had the highest average cost, highest median loan amount, and lowest repayment rate, while community college programs had the lowest average cost, lowest graduation rate, and lowest median loan amount. Repayment rate data were not available for community colleges. Of the five states and the District of Columbia that require school accreditation, there were 208 disciplinary actions from 2009-2011; the remaining 28 regulated states that do not require school accreditation reported 1,702 disciplinary actions during the same period. Seventy-five percent of massage educators and 58% of CIHC educators stated that the current quality of massage education is inconsistent, with only 10% of massage educators and 8% of CIHC educators agreeing that current educational quality is adequate. Fifty-six percent of massage educators and 40% of CIHC educators agreed that educational quality needs to improve if massage therapists want to be considered comparable to other allied health professionals. Both groups suggested specific areas and means of improvement, including raising admission requirements and offering an academic degree.

CONCLUSIONS: Accreditation appears to improve the quality of massage education; however, more consistent methods for calculating tuition costs, educational outcomes, and classifying severity of disciplinary actions are needed. Both quantitative and qualitative evidence indicates that the current quality of massage education in the US is inconsistent and less than adequate.
Specific areas of improvement needed for massage therapists to be perceived as comparable to other allied healthcare providers are described.

Dew (5) points out that much of the confusion in defining educational quality stems from the simultaneous use of very different frameworks to describe it. These are quality as endurance, quality as luxury or prestige, quality as conformity to requirements, quality as continuous process improvement, and quality as value added: we expect those completing any educational program to have gained demonstrable skills or knowledge as a result. The most relevant frameworks for evaluating the quality of massage education from an accreditation perspective are: endurance, as it applies directly to the financial stability of an institution; conformity to requirements, as it applies to meeting accepted educational standards; value added, which can be evaluated by metrics such as graduation rates, employer placement rates, and pass rates on licensing examinations; and process improvement, as reflected in the institutional self-study. The self-study process typically combines and documents elements of all these frameworks.

It is important to distinguish between the role of quality in accreditation, which focuses on setting base standards that organizations must meet to be considered acceptable providers of education services, and quality as a 'stretch' goal of achieving educational excellence, which individual institutions may attempt to achieve for a variety of purposes. The Baldrige National Quality Awards in health care and education (6) are examples of the latter, while the Commission on Massage Therapy Accreditation (COMTA) accreditation standards exemplify the former. The self-study process that most educational accreditation organizations employ can serve not only as a summative evaluation of how well a program meets basic requirements, but also as a formative means to build a blueprint for excellence, through identifying potential areas of improvement. This formal written self-evaluation of an institution's compliance with established educational standards not only provides documentation of its strengths and weaknesses as identified by multiple stakeholders such as faculty, students, and alumni, but is ideally a reflective process allowing administrators to consider from a broad perspective how well the institution is meeting its own goals and mission.

The framework of quality as process improvement and the related concept of quality management have received a great deal of attention since their widespread implementation in American businesses during the 1990s. The concept of total quality management (TQM) has been applied to education, most notably by Edward Sallis (7). In attempting to apply quality management to education, Sallis proposes a compelling reason why TQM should be applied to education, and that is accountability. Accountability may be one reason for the current trend of assessing educational quality by focusing not only on traditional 'input' measures, such as teacher-student ratios, teacher credentials, and the size or scope of physical facilities such as libraries, but also on educational outcomes such as graduation rates, time to degree completion, and job placement rates. Job placement and debt repayment rates especially have come under increased scrutiny, given the high cost of post-secondary education.

Education cost is a popular and controversial topic currently, as more post-secondary students graduate with significant loan burdens (8). For-profit corporate colleges and schools, some of which offer massage therapy programs, have recently been the subject of increased criticism by federal agencies (9) and by student consumers themselves (10). While the majority of for-profit massage schools are proprietary, privately owned by individuals, corporate-owned schools and career and technical colleges graduate a disproportionate number of new practitioners. According to a 2013 Associated Bodywork and Massage Professionals (ABMP) report, corporate massage schools represented 5% (60) of the estimated 1,319 programs, but graduated 14% of all students, almost as many as the accredited proprietary schools (145) that constituted 11% of all programs and graduated 19% of the estimated 39,000 students. In contrast, non-accredited schools (541) constituted 41% of all programs, but graduated only 34% of students. Enrollment numbers by category of school provide some explanation for this trend: the average number of students enrolled per school for corporate schools is 84, almost three times the average of 31 students per school for all proprietary schools (accredited and non-accredited combined) (11).

Massage Therapy Education

As a discipline, massage therapy currently stands at an uneasy crossroads of vocational training and academic post-secondary education, as evidenced by the variety of educational institutions that offer training programs in massage therapy. These range from purely vocational programs offered at career and technical training schools to two-year associate degrees offered through community colleges. Some universities that train doctors of chiropractic, acupuncture and Oriental medicine, and naturopathy also offer both certificate and associate degree programs in massage. There is even a new four-year bachelor degree program in massage therapy offered at Siena Heights University, where applicants can receive academic credit for having passed the National Certification Examination (http://www.sienaheights.edu/LandingPages/MassageTherapy.aspx).

A longstanding tension exists between those who view massage education as strictly vocational and want it to remain so, focused on training students to provide a personal service, and others who see it as an integrative health care discipline similar to acupuncture and other complementary and integrative therapies. Among the states that regulate the practice of massage therapy, it is more often regulated as a health profession than as a personal service. The rapid growth of massage therapy in the larger context of the integrated health care movement by consumers has also contributed to the profession's ongoing identity crisis. According to a recent industry survey, consumer use of massage for health and medical reasons is increasing annually, as are referrals from physicians and other health care providers (12). As massage became more widely used by US consumers in the 1990s, the massage therapy industry grew as well. The numbers of educational programs and practitioners increased rapidly, from an estimated 180,880 practitioners in 2000 to 307,104 practitioners in 2012, a 58% increase (12,13). The number of massage programs showed a comparable increase, from just over 600 in 2000 to 1,440 in 2011 (14).
The recession of 2008, together with market saturation, has cooled these trends to some extent, which has been documented through periodic surveys by two major professional associations of massage therapists. Currently, massage education programs are in a state of flux that reflects concerns and discussion regarding educational quality within the profession, as demonstrated by the development and recent publication of the Entry Level Analysis Project (ELAP) (15). The impetus for the ELAP project was the perceived inconsistency of quality, depth, and focus in entry-level massage therapy education by national leaders from a number of professional organizations, including the Alliance for Massage Therapy Education (AFMTE) and the American Massage Therapy Association (AMTA).

This evaluation adds to the discussion on massage education quality and is focused on three broad objectives: 1) Is accreditation improving the quality of education for massage therapy? If not, then what do we need to do to improve it? 2) Does accreditation by COMTA specifically improve quality of education compared to other vocational accrediting agencies that do not require curriculum competencies specific to massage in their standards? 3) Would adding competencies at an "advanced" level, or specific degree levels, be helpful in advancing massage therapy in the eyes of other health professionals? And if so, are there any particulars that they would expect to see in such advanced levels of training to consider working with a massage therapist in their own type of practice?

METHODS

To answer these questions, a mixed methods approach was used. Education quality was examined quantitatively in terms of measurable educational outcomes, including tuition costs, graduation rates, job placement rates, median loan amounts, and repayment rates, organized by type of school or program and by accreditation agency. Types of schools and programs are based on the types used in published data from the US Department of Education's 2011 Gainful Employment metrics. Data were collected by COMTA staff using both internal sources and publicly available data from the US Department of Education Gainful Employment 2011 Informational Rates (17), as well as publicly available information published on individual school websites. Schools that were clearly identifiable as part of corporate chains were grouped for sub-analysis. Especially for several of the large chains of corporate-owned schools, there was no massage program found at the location originally listed in the US Department of Education report, and the apparent closures have not always been possible to confirm. However, these branches/schools were included in the analysis because they were associated with a repayment rate, and the estimated number of closures is in and of itself relevant data. According to the 2013 ABMP schools survey, the number of massage programs overall has decreased from a high point of 1,600 in 2009, to 1,440 in 2011, to 1,310 in 2013 (11).

As an additional indirect measure of educational quality related to accreditation, the numbers of disciplinary actions against practitioners in states that require graduation from an accredited school were compared to the numbers in states that do not, over the three-year period of 2009-2011, across all regulated states. The data were collected directly from state massage regulatory agency websites where possible, and through contacting the agency directly when such information was not published online.
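As an illustration of the quantitative aggregation described above (averaging costs and outcomes by type of school or by accreditation agency), a minimal pandas sketch follows; the column names and row values are hypothetical, not the actual COMTA dataset.

```python
import pandas as pd

# Hypothetical per-school records; the schema and values are illustrative
# assumptions, not the actual dataset gathered by COMTA staff.
schools = pd.DataFrame({
    "school_type": ["for-profit", "for-profit", "corporate",
                    "community college", "CAM university"],
    "tuition":     [12990, 14020, 16562, 5647, 10768],
    "grad_rate":   [0.74, 0.69, 0.68, 0.55, 0.78],
})

# Average tuition cost and graduation rate by type of school,
# analogous to the aggregation reported in Table 1.
print(schools.groupby("school_type")[["tuition", "grad_rate"]].mean())
```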
To complement the quantitative data analysis, individual and focus group interviews regarding the quality of massage education were conducted with two groups. The first consisted of massage educators/practitioners recruited from the AFMTE, ABMP, and AMTA. The second group consisted of complementary and integrative health care (CIHC) educators/practitioners recruited from the Consortium of Academic Health Centers for Integrative Medicine (CAHCIM) and the Academic Consortium for Complementary and Alternative Health Care (ACCAHC). These interviews informed the development of two parallel surveys focused on the current quality of massage education. The survey content, wording of individual questions, and answer choices were based on information collected during the qualitative interviews. Both surveys were administered via weblink to allow respondents to complete the surveys anonymously and encourage unbiased responses. The massage educators' version was sent via the AFMTE e-newsletter and the ABMP and AMTA school newsletter distribution lists. The CIHC educators' survey was emailed to members of CAHCIM and ACCAHC, with the goal of reaching a comparable audience of integrative health care educators knowledgeable about massage therapy, yet outside the massage profession. Both surveys contained response options for open-ended comments. IRB review of the evaluation was performed by Solutions IRB, a licensed commercial IRB review provider, and approval for the study under the category of exempt research was obtained for all phases and methods used in the evaluation prior to its start.

Summary of Schools Data

Of the 487 schools from which publicly available data were obtained, 386 programs reported tuition costs, with program lengths varying from six months up to two years. Whenever a school offered multiple massage programs of varying lengths, costs were averaged. In most cases, tuition cost was taken from the Gainful Employment disclosures. However, occasionally it was not reported there, so COMTA staff gathered it from other places on the school's website or catalog. Staff attempted to maintain consistency in how the tuition cost was calculated, but consistency was not always possible. For example, some schools include licensure fees, books, and supplies added to the direct tuition costs, where others do not, and these details were often not specified. For this reason, cost should be considered an approximate number. A comparison of average costs by type of institution is shown in Table 1.

With these caveats in mind, the average tuition cost across all schools/programs was $13,605. Costs varied widely, ranging from $2,392 for a certificate that could be completed in six months, to as much as $46,845 for a two-year associate's degree at a private institution. Longer programs at for-profit and corporate schools generally had higher tuition costs, averaging $13,505 and $16,562, respectively. The longest programs tended to be community college programs leading to associate degrees over three to four semesters, with a much lower average cost of $5,647. Certificate programs offered through CAM universities had an average cost of $10,768.

Outcomes, including graduation rates and placement rates, are also allowed to be calculated using more than one method. Standards for reporting 'on-time' graduation rates for the USDE were changed during the time this evaluation was conducted, and do not always consider the total number of students who started a program and graduated within the same cohort, a measure that many consider to be more closely related to educational quality. The same variation in calculation methods also applies to job placement rates; some schools use their pass rates on licensing examinations in lieu of actual job placement. Massage programs in public institutions presented the most difficulty in finding the required outcomes data. These programs do not have to consistently follow the Gainful Employment requirements and often have additional state regulations to follow. Often, rates were provided only for the institution as a whole or for the three largest programs (which generally do not include massage). Rates are listed when they could be found, but there are numerous omissions. All outcomes were averaged by type of school, and these results are also presented in Table 1. The average reported graduation rate across all programs was 71.9%, and the reported job placement rate was 95.6%. These numbers are very likely to be overestimates, especially when examined in light of the financial aid data. Of the schools and programs that reported student loan data, 84% of students at those institutions received federal financial aid. The median loan amount was $8,052. The average percentage of all massage therapy program students included in this analysis who repay their loans is only 43.4%.

Average tuition costs and educational outcomes for each accreditation organization are listed in Table 2. COMTA-accredited schools and programs show an average tuition cost that is below the reported national average and below that reported for for-profit schools, and they have the highest repayment rate among all accreditation organizations. Most massage therapy accreditation organizations accredit institutions; COMTA is the only one of these that offers programmatic accreditation specific to massage therapy. NACCAS, which primarily accredits schools offering training in cosmetology, skin care, massage, and related subjects, is a close second in terms of repayment rates, and has the lowest average tuition cost.

Summary of Practitioner Disciplinary Actions Data

Figure 1 shows the total number of disciplinary actions by state, indicated by black triangles, over the three-year period of 2009 to 2011. It should be considered a very rough estimate, as it includes suspensions and license revocations as well as actions consisting only of penalty fines. It is difficult to verify the accuracy of these data and/or compare states due to the variance in what is considered a "disciplinary action." In some states, failure to pay child support, student loan default, or failure to maintain documentation of CEs are all considered grounds for disciplinary action; some actions are published on state websites as part of the public record, while others are kept confidential. In some cases, there were no numbers to report because a state had only recently implemented regulation. In addition, differences between states in reported numbers seem to indicate how actively the individual board pursues disciplinary action, rather than whether more unlawful practice occurs. For example, many states showing no actions were contacted by COMTA staff to collect this information, but no actual numbers were able to be obtained, despite more than one attempt. It is likely that the total numbers shown here underrepresent the actual number of serious legal and ethical violations, as these are likely to be underreported to state boards.

High numbers of disciplinary actions in a state are usually due to a large number of relatively minor infractions. The state of Mississippi is a good example: between 2009 and 2011, there were 170 disciplinary actions, of which only 6 were ethical violations resulting in suspension or license revocation; the other 164 actions were fines for failing to pass a CE audit. However, even allowing for measurement error and the confounding effects of population and practitioner density, the magnitude of difference between the total numbers of sanctions against practitioners in regulated states that require graduation from an accredited school versus a non-accredited school is large. Of the five states and the District of Columbia that require school accreditation, there were 208 sanctions from 2009-2011; most of these (170) were in Mississippi, and 26 were in Maryland. Of the remaining 28 regulated states for which we have data and that do not require school accreditation, there were 1,702 sanctions during the same period. The ratio of disciplinary actions to states is 208:6 versus 1,702:28, or an average of 34 in states that require school accreditation versus 61 in those that do not.
Demographic data showed that the majority of respondents (71%) were female, 28% were male, and 2% preferred not to answer. The average age was 51, and the average number of years of experience as a practitioner was 17, with an average of 11 years of experience as an educator. The majority were white/ Caucasian (85%), followed by mixed (4%), Latino-Hispanic (2%), Asian (1.5%), and African-American (1%). Five percent of respondents preferred not to as frequently or slightly more frequently. Complete results are presented in Table 4. In making a referral to a massage therapist for their own patients or colleagues, the most important factor was personal knowledge or direct experience with the practitioner, followed by a state-recognized credential to practice, and word-of-mouth recommendation from a respected source. The least important factors were the practitioner's amount of academic education, which massage school the practitioner attended, and extended care, and 2% in a community health clinic or free clinic. Figure 2 summarizes these results. When asked to select necessary competencies educators wanted a massage therapist colleague working in a clinical setting to have, respondents selected the following competencies most often: professional appearance and demeanor (99%); proficiency in applying therapeutic techniques to benefit the patient (97%); good oral and written communication skills (97%); and clinical judgment-ability to modify treatment to the individual patient (96%). Patient intake interviewing skills (94%) and therapeutic relationship skills (94%) were valued equally. Also frequently selected were interprofessional collaboration (90.5%), ability to develop a treatment plan (90%), and ability to assess treatment outcomes (86.5%). Research literacy was selected by almost half of all respondents (48%), and advanced or specialized training in orthopedic or rehabilitation massage was deemed necessary by 43% of respondents. Least frequently selected competencies considered necessary were other advanced or specialized trainings in oncology massage (15%), geriatric massage (18%), pre-and perinatal massage (19.5%), and other competency or advanced training (23%). When asked to describe these, groups of techniques such as Swedish and Eastern or individual techniques such as myofascial release were specified. Only 25% selected familiarity with electronic medical records and 36.5% selected advanced or specialized training in medically oriented massage as a necessary competence. In choosing a personal massage therapist to see oneself, the pattern of competencies considered necessary was similar, with general competencies selected more often, and advanced or specialized training in working with specific populations selected less often. However, the necessary competencies for colleagues working in a clinical setting were selected 5%-10% more often compared to one's own personal therapist, and interprofessional collaboration was selected almost 30% less often. The exceptions to this trend were advanced or specialized training in orthopedic or rehabilitation massage, and advanced or specialized training in other medically oriented massage, where massage educators selected these Table 6 below. Typical comments for this question emphasized competency-based education, along with fundamental knowledge and skills, and included: "Competency based education. Greater emphasis on critical thinking and reasoning." 
"Uniformity between states, practical exam for all, minimum educational competencies-not just hours." "More educational hours on A&P, Pathology, and developing a treatment plan for individual clients." "Not necessarily more hours, but better hours on communication and other skills. The academically based program that I envision would be voluntary, not mandatory. No cycling MT students into a program without appropriate prereqs or prep. Skills-based education rather than hours-based." "More hands on hours & internship. 50% of the hours should be hands on. I have seen schools that emphasis [sic] academics and therapists come out with poor hands-on skills while schools that do not emphasis academics have better hands on but lack ability to understand how & why massage is helpful for the patient." Comments also pointed out that not all therapists want to work in health care settings, and proposed a two-tiered level of education: "Again, I am not sure that it needs to be improved until we decide as a profession what we want a beginning student to know. Actually I think we should do as many professionals do and have different levels of education depending where the therapist wants to work...like LPN or RN, PTA or PT...and so on. While I would love to have all students want to really expand themselves, the truth is a lot of students want to only practice stress relieving their amount of continuing education. Somewhat important were the number of years in practice and the general reputation or having heard of the practitioner. When asked for their opinion of the current quality of massage education nationally, 75% of respondents stated that the quality is inconsistent, and 55.7% agreed that quality needs to improve if massage therapists want to be considered comparable to other allied health professionals such as physical therapy assistants. Only 10% agreed that quality is adequate. Complete results are presented in Table 5. Comments for this question were often critical of current massage education quality but diverged regarding how to address it. One respondent commented that "I believe the medical community will continue to shut us out unless we step up our abilities to meet them in the clinical world." Another stated that "I believe the profession needs to require academic degrees, but I believe that this is an idea ahead of its time," and that "massage education is outdated. It needs to revamp into the 21st century; ethics, conduct, working with diverse populations, communication-for today's consumer!" Another respondent held an opposing opinion about how to improve the quality of education, noting that: "The quality is generally poor and getting worse. Most efforts to "improve" it are focused on cognitive learning that is largely irrelevant to the practice of massage. Stethoscope envy has us focused on the ridiculous goal of becoming accepted by the scandalous allopathic model of sickness maintenance. The nature of the questions and responses in this survey leave me little hope for it's [sic] future. I fear genuinely gifted massage practitioners will soon be driven back underground as they have been traditionally throughout history. What a shameful price to pay for the popularity of this approach to healing!" When asked if the quality of massage education needs to be improved, 86% said "Yes", 4.6% said "No", and 9% answered "I don't know." 
When asked what needed to be changed to improve the quality of massage education, better teacher training was the most frequently selected response; complete results are presented in Table 6 below. Typical comments for this question emphasized competency-based education, along with fundamental knowledge and skills, and included: "Competency based education. Greater emphasis on critical thinking and reasoning." "Uniformity between states, practical exam for all, minimum educational competencies-not just hours." "More educational hours on A&P, Pathology, and developing a treatment plan for individual clients." "Not necessarily more hours, but better hours on communication and other skills. The academically based program that I envision would be voluntary, not mandatory. No cycling MT students into a program without appropriate prereqs or prep. Skills-based education rather than hours-based." "More hands on hours & internship. 50% of the hours should be hands on. I have seen schools that emphasis [sic] academics and therapists come out with poor hands-on skills while schools that do not emphasis academics have better hands on but lack ability to understand how & why massage is helpful for the patient."

Comments also pointed out that not all therapists want to work in health care settings, and proposed a two-tiered level of education: "Again, I am not sure that it needs to be improved until we decide as a profession what we want a beginning student to know. Actually I think we should do as many professionals do and have different levels of education depending where the therapist wants to work...like LPN or RN, PTA or PT...and so on. While I would love to have all students want to really expand themselves, the truth is a lot of students want to only practice stress relieving massage...and what a gift to mankind! I don't want to lose that in our quest to be medical wannabes... because if we are going to work in hospitals and think we are going to get paid for the work we do, we are going to need to look at massage in an entire different light."

In terms of the role of accreditation, 50% of massage educators believe that accreditation does improve the quality of massage education, 36% believe it doesn't, and 14% don't know. Forty-one percent (41%) believe that program accreditation specific to massage therapy is superior to general institutional accreditation that does not specify curriculum competencies for massage therapy, while 31.6% think it is not superior and 27% don't know.
Comments pointed out that, while accreditation can help improve quality of education by outlining standards for curriculum content, it can also have negative consequences through poor implementation and its use as a means to qualify for large amounts of Federal financial aid. Several respondents cited corporate schools as an example of poorer quality education prone to abuse of financial aid, stating for example, that "corporate schools are only looking for money." Other comments were very critical of the lack of quality in corporate programs: "Most of the graduates from career schools/corporate schools don't have the quality education that is found at private schools. Since most graduates come from the corp/career schools, the quality there needs improving greatly. Cost does not equal quality in those schools. For the best training, massage therapists need to attend private schools, where the school personnel truly care about helping them be the best massage therapists, rather than the only focus being on the student's money. Corp/career schools can't keep instructors, turnover is a huge issue, they pass students with a grade of 60 (really?), and the instructors who do teach there are not qualified to teach most of the subjects. There are quality programs out there, most are at private, smaller schools. That is why the massage therapists from the private schools are in such demand."

A large number (112) of respondents wrote detailed and varied comments about what they believe is necessary to improve the quality of massage education. Overall, most comments were supportive of massage education becoming more academically based, of accreditation that is specific to massage therapy and bodywork, and of accreditation that is competency-based. Some called specifically for degree-based programs, as well as for increasing student admission requirements beyond having a high school diploma or GED. However, several respondents cautioned against raising academic standards at the expense of developing students' hands-on skills. Some typical comments: "More pathology; more rehab skills as in Canada; clinical thinking skills and ability to articulate decision making." "More equal emphasis and teaching of the art as well as the science (however challenging that may be, it is very important)." One respondent went further, stating: "This should really NOT be a discussion about the quality of education but about strategically organizing massage education as a whole in the U.S. With 250 modalities available and multiple submarkets in the massage field, there is definitely room to start discussing the implementation of an Associate Degree as a minimum standard and a Bachelor Degree in Massage Therapy as a goal for 2020, making sure that there is a smooth transition to an even-higher standard."

Summary of Survey Results for CIHC Educators

Members of CACHIM (1073) and ACCAHC (204) were sent individual emails by their respective executive directors for a total of 1,277 possible respondents. Follow-up reminders were sent two weeks after the initial email request to participate. Of the total possible respondents, 145 or 11% completed the survey, a typical rate of return for an online survey. Of those, 25% identified their primary discipline as medicine or integrative medicine, 10% as acupuncture/Oriental medicine, 6% as nursing, and 4% as chiropractic. Other professions represented included psychology/counseling/social work, yoga therapy, physical therapy, naturopathic medicine, ayurvedic medicine, homeopathy, nutrition, and dance/movement therapy. Roughly 20% selected "Other" and described their primary discipline as medical education, occupational therapy, and research. A surprisingly large number of respondents (32%) identified their primary discipline as massage therapy/bodywork/somatic education, perhaps due to the number of massage educators within ACCAHC. Results were filtered to exclude those identifying massage therapy as their primary discipline, and only the results of the 97 non-MT respondents are reported, in an effort to distinguish the views of non-massage therapy educators from those of massage therapy educators.

In terms of respondent demographics, 69% were female, 29% were male, and 2% preferred not to answer. The average age of respondents was 50 (± 11), and the majority were white/Caucasian (73%), followed by Asian (12%), mixed (5%), African-American (2%), and Latino-Hispanic (3%). Approximately 4% of respondents preferred not to answer. These results, together with respondents' disciplines, are summarized in Table 7. Respondents were evenly distributed geographically across the US, with a small number (6%) of Canadians. The average number of years working in education was 16.86 (± 11.35), and 78% maintain either a part-time (40%) or full-time (34%) clinical practice in addition to their educational role. Practice characteristics showed fewer respondents in private practice settings compared to massage educators, with the majority practicing in hospitals or similar settings. These results are summarized in Figure 3. Twenty percent (20%) reported that they currently teach full-time, 58% currently teach part-time, and 23% serve in administrative positions and do not currently teach. The majority of respondents consider themselves at least somewhat knowledgeable about massage education (38%), with 24% rating themselves as moderately knowledgeable, and 22% as very knowledgeable. Only 16% rated themselves as not at all knowledgeable regarding massage education.
Respondents were then asked a series of questions about what competencies they considered necessary for a massage therapist serving in different roles: as a colleague or peer practicing in a clinical setting, or as one's personal massage therapist providing services for the CIHC educator/practitioner. Respondents were allowed to select as many competencies as they felt were required for that role. Respondents were then asked what factors they considered most important in choosing a practitioner to whom they would want to refer their own patients or clients for massage therapy. All answer choices were developed based on responses from previous individual and focus group interviews with both massage and CIHC educators, and included an optional section for comments.

The most frequently selected competencies considered necessary for massage therapist colleagues to have included: clinical judgment-ability to modify treatment to the individual patient (96%); interprofessional collaboration or ability to work as part of a team (96%); professional appearance and demeanor (94%); and good oral and written communication skills (92%). Therapeutic relationship skills (93.5%) were selected almost as often as proficiency in applying therapeutic techniques to benefit the patient (92.4%). Ability to assess treatment outcomes (88%), ability to develop a treatment plan (85%), and intake interviewing skills (83%) were also frequently rated necessary. Research literacy-ability to find and critically evaluate relevant health care research (52.2%)-and familiarity with electronic medical recording or charting (51.1%) were considered necessary less frequently. Competencies with the lowest frequencies included advanced or specialized training in areas such as geriatric massage (15%), pre- and perinatal massage (21%), oncology massage (25%), orthopedic or rehabilitation massage (36%), and other medically oriented massage (38%).

A possible explanation for the advanced/specialized training areas being selected less frequently as necessary was noted in many of the comments for this question-respondents thought that only therapists working in a clinical setting with these specific populations needed to possess such specialized training. As one respondent put it: "I would like someone I call "colleague" to have advanced training for whatever population they were working with-for me that happens to be oncology. It wouldn't be as relevant if they worked in the clinic on a lot of postsurgery (not oncology surgery specifically). So, I mean to indicate advanced training if they are working with special populations. Otherwise, that seems unprofessional and I wouldn't want to refer to them as a colleague." Comments also indicated that many respondents viewed ongoing continuing education to develop new skills as a necessity for professional development, and something they would expect of any peer or colleague.
In choosing a therapist to see for oneself as a client or patient, the competencies that CIHC educators chose followed a similar pattern, with general competencies selected more often and specific competencies such as working with various special populations selected less often. Overall, the same competencies judged as necessary for a colleague in a clinical setting were chosen less often for one's personal therapist. Competencies most often selected as necessary were professional appearance and demeanor (89%), weighted equally with clinical judgment and proficiency in applying therapeutic techniques (89%). Therapeutic relationship skills (88%), good oral and written communication skills (71%), ability to assess treatment outcomes (71%), ability to develop a treatment plan (67%), and intake interviewing skills (60%) were also frequently identified. The least frequently selected competencies for one's personal therapist were advanced/specialized training in oncology massage, pre- and perinatal massage, and geriatric massage (6.5%), followed by familiarity with electronic medical records or charting (16%), and advanced/specialized training in other medically oriented massage (24%). About a quarter of respondents selected advanced/specialized training in orthopedic or rehabilitation massage (26%) and research literacy (24%) as necessary competencies for one's own therapist. One respondent commented: "As a basically healthy person who generally seeks massage for basic support and rest, I want a therapist who can listen, be present and pay attention to what he/she feels in my tissue while working with me. I appreciate advanced training for what it seems to say about a practitioner's commitment to his/her development." Or, as another respondent put it simply: "Knows how to give a good massage." Complete results are presented in Table 8.

Generally, CIHC educators considered the same competencies as necessary at almost the same frequency as massage educators, usually within 5%. Intake interviewing skills and ability to develop a treatment plan were listed slightly more often by massage educators, while familiarity with electronic medical records or charting was listed twice as frequently by CIHC educators compared to massage educators. A comparison of the necessary competencies to consider a massage therapist as a peer or colleague in a clinical setting by massage and CIHC educators is presented in Table 9.

Respondents were then asked to rank the factors they considered most important in choosing a therapist to whom they would refer their own patients or clients. The most highly ranked factor was direct knowledge or personal experience of an individual therapist. A word-of-mouth recommendation from others they respect was the next most highly ranked, followed by a state-recognized credential to practice. Number of years in practice was also considered important, but secondary to the previous factors, as was the practitioner's general reputation. Educational factors, such as which massage school the therapist attended and amount of academic and continuing education, were rated as the least important factors.

Respondents were asked their opinion regarding the current quality of massage education nationally. The majority (58%) agreed that the quality is inconsistent. Complete results are presented in Table 10 and are contrasted with massage educators' opinions. While both groups are in agreement that quality needs to be improved, more massage educators believe that current quality is inconsistent and needs to be improved if MTs want to be considered comparable to other allied health professionals. Typical comments from CIHC educators in response to this question included: "There are more MT education facilities, but what I hear from my clients is that many experiences have been sub-par and nonspecific."

In response to the question, "Do you think that the quality of massage education needs to be improved for massage therapists to be seen as comparable to other complementary or integrative health care professionals, such as acupuncturists?", 61% of respondents answered "Yes", 8% answered "No", and 31% answered "I don't know." Some comments specifically pointed out that the lack of consistency in education is a problem: "The inconsistency of massage education and licensing requirements makes it hard to evaluate massage as a single profession."
"Some schools are very high quality. It would be good to have more uniformity." "Consistency of massage education, perhaps." Respondents also commented that other providers need to be better educated about massage therapy, and that massage education should teach enough pathology to recognize more serious conditions that require referral: "What will make a difference is education of health professionals on effectiveness of massage for medical conditions-also improved interdisciplinary communication is what is necessary in order to become part of established conventional care." "More diagnostic classes need to be taught for massage therapists to be able to recognize potential MENARD: QUALITY EDUCATION IN USA "The requirements for admission to programs might need to be higher." "I have worked with incredibly skilled, incredibly knowledgeable, advanced practice LMTs who practice medical massage therapy. But I do not believe they are the norm as far as licensing, credentialing, continuing professional development." "With the direction of massage therapy being integrated into more clinical environments, such as hospitals and medical clinics, the overall/ general education of massage therapists is vastly inadequate. The demand for massage therapists with higher levels of clinical training far exceeds the number of qualified caregivers." "Too much fluff and buff and too little therapy. Needs more awareness of massage as a body-mindspirit intervention in which the client becomes an active partner in the therapeutic endeavor. Also "Improve education about primary anatomy and physiology. Integrate across musculoskeletal and meridian systems and connective tissue and neurology." "No practice can be specifically = to another. In my opinion, the public has more confidence when they see/are aware of an academic degree (whether or not necessary). Consistency in thorough education in A&P, Kinesiology, empathetic communication, clear documentation skills, and activity analysis are all necessary for a comprehensive, effective massage therapy session." "With the psychosocial and communication skills, a consistent education of culture and the diversity of our nation. My academic background in cultural, social and developmental psychology has served me well and often in the hospital and oncology setting. I've seen other providers, usually new, flounder with ignorance working with multi-cultural patients (i.e., so much prejudice against Muslims or assuming Pakistanis are from India, etc.). Better teaching training. I look good on paper for massage and teaching-having taught university and practiced massage. Teaching massage is very different! Maybe ongoing staff trainings, also including cultural education. When I taught massage, a revered teacher was making the assumption that those who might identify as African-American or black, were inherently less smart because their (her students) communication skills were not like hers. This came out in a teacher development day." Regarding accreditation of massage education, 54% of CIHC respondents believe that accreditation generally improves the quality of massage education, with 9% answering "No" and 37% responding "I don't know." However, the majority of respondents were unaware of the difference between programmatic versus school or institutional accreditation. 
Other responses covered a variety of topics, from anatomy to cultural competence, and included: "More whole body systems interconnectedness, more disease-specific/etiology driven, organ-specific protocols, mind-body medicine skills, energy medicine, and therapeutic counseling skills." "Program entry requirements other than age 18 and a credit card. Even nursing, PTA, and OTA programs have prerequisites." "The requirements for admission to programs might need to be higher." "I have worked with incredibly skilled, incredibly knowledgeable, advanced practice LMTs who practice medical massage therapy. But I do not believe they are the norm as far as licensing, credentialing, continuing professional development." "With the direction of massage therapy being integrated into more clinical environments, such as hospitals and medical clinics, the overall/general education of massage therapists is vastly inadequate. The demand for massage therapists with higher levels of clinical training far exceeds the number of qualified caregivers." "Too much fluff and buff and too little therapy. Needs more awareness of massage as a body-mind-spirit intervention in which the client becomes an active partner in the therapeutic endeavor." Also: "Improve education about primary anatomy and physiology. Integrate across musculoskeletal and meridian systems and connective tissue and neurology." "No practice can be specifically = to another. In my opinion, the public has more confidence when they see/are aware of an academic degree (whether or not necessary). Consistency in thorough education in A&P, Kinesiology, empathetic communication, clear documentation skills, and activity analysis are all necessary for a comprehensive, effective massage therapy session." "With the psychosocial and communication skills, a consistent education of culture and the diversity of our nation. My academic background in cultural, social and developmental psychology has served me well and often in the hospital and oncology setting. I've seen other providers, usually new, flounder with ignorance working with multi-cultural patients (i.e., so much prejudice against Muslims or assuming Pakistanis are from India, etc.). Better teaching training. I look good on paper for massage and teaching-having taught university and practiced massage. Teaching massage is very different! Maybe ongoing staff trainings, also including cultural education. When I taught massage, a revered teacher was making the assumption that those who might identify as African-American or black, were inherently less smart because their (her students) communication skills were not like hers. This came out in a teacher development day."

Regarding accreditation of massage education, 54% of CIHC respondents believe that accreditation generally improves the quality of massage education, with 9% answering "No" and 37% responding "I don't know." However, the majority of respondents were unaware of the difference between programmatic versus school or institutional accreditation. When asked whether programmatic accreditation specific to massage therapy was superior to general institutional accreditation that does not specify curriculum competencies for massage therapy, more than half of respondents, 53%, answered "I don't know." Forty percent answered "Yes" and 8% of respondents answered "No."

Final comments from CIHC educators on what is needed to improve the quality of massage education included suggestions regarding accreditation and specific curriculum content: "The organization who is in charge of the massage education should ensure the quality of the massage schools."
"Most accreditation is not so important because it is not massage specific enough. Good, in-depth accreditation could make a real difference." "Accreditation is an expensive process. Some schools will go with whatever program will accredit them at the less expensive price. Quality of education then suffers, in my opinion. Also, most schools have low requirements. Every new graduate of a massage therapy program would benefit from mentoring upon graduation. Every single one." "Massage therapists are working in hospitals caring for the suffering of many seriously ill patients. They need training and confidence to work effectively and safely with these patients and their family caregivers, and they need to act professionally and learn to work as a part of an interdisciplinary medical team. Massage therapists no longer only work in spas and health clubs and the education needs to reflect this change in modality application." "I'm aware of a well established, 750+ hr requirement, "nationally recognized" school that produces MTs that can't provide a good, general massage for a healthy client. And, I know small schools with lower hourly requirements that produce excellent practitioners. I hope we remember to focus on quality first and foremost, not quantity for quantity's sake... One grad of the first type is actually very angry that she went through 750 hours, got her CMT and was told by several potential employers that she just doesn't have the basic skills. And as I know her, she is not a "bad" student or disengaged learner... just poor instruction and little to no clinical feedback."
Discussion

The quantitative results on educational outcomes presented here can only be considered an approximation, due to the different ways that schools are allowed to report their numbers to their respective accrediting agencies and to the Department of Education. Graduation rates and job placement rates, in particular, are likely to be optimistic estimates, as most programs have a direct incentive to 'massage the data' to have these numbers appear in the best possible light. The financial aid data, especially loan repayment rates, probably paint a more realistic picture. The ability to repay student loans indirectly indicates that a graduate is employed, but whether they are employed as a massage therapist and making a living wage is unknown. The majority of the schools included in this analysis participate in Title IV. Schools that do not participate in Title IV are not required to publish gainful employment rates or other related information, and many do not provide this information on their websites. Some schools provided data on one or more outcomes, but not all outcomes of interest. Data from non-accredited programs are difficult to obtain and could not be included. Thus, the results presented here may not be representative of all US massage schools/programs, particularly for non-accredited schools and programs that graduate fewer than 30 students annually, and so these numbers should be interpreted cautiously.

Based on the available data, the average tuition cost for a massage program nationally is $13,605, and this cost varies a great deal, depending on the type and length of the program. The national average loan repayment rate is only 43.4%, indicating that more than half of massage program graduates have difficulty repaying their student loans. In comparison, the average tuition cost of corporate programs ($16,561.77) is higher than the national average, and these programs had a relatively high median loan burden of $9,998.85, with the lowest repayment rate (41.3%).
Average tuition costs at all other for-profit schools are somewhat lower ($13,505.24), with a slightly lower median loan burden of $8,228.05 and a slightly higher repayment rate of 46.7%. Tuition costs at community colleges ($5,647.05) are considerably lower than the national average, and these programs show the lowest median loan burden ($2,004.06). No repayment rate data were available for community college programs; however, the relatively low loan burden makes it more likely that repayment rates are higher than those for corporate and for-profit programs. Programs with the highest average repayment rate (83.45%) are those based in CAM universities. These have a lower average tuition cost of $10,768.40, with a median loan burden of $9,871.75, which is comparable to the loan burden of corporate programs, but a repayment rate that is almost double. By these metrics, community college and CAM university-based programs appear to offer the best value for cost, followed by for-profit programs. Overall, corporate programs appear to offer the least value for cost.

Data analysis of tuition costs and educational outcomes shows that some accreditation organizations have poorer outcomes than others. ACICS-accredited massage schools have the highest average tuition cost ($18,581.28) and the highest median loan burden ($11,532.50), with the lowest repayment rate (39%), which appears to indicate poor value for cost. Schools accredited by other organizations have fairly comparable costs, with NACCAS-accredited schools showing a relatively lower average tuition cost ($9,253.98) and a relatively higher repayment rate of 59%. Programs accredited through COMTA have the highest repayment rate (61.00%) with a moderate average tuition cost of $12,592.36, slightly below the national average overall and below the average cost of noncorporate for-profit schools. The median loan burden for graduates of COMTA programs is almost twice as high ($7,969.11), compared to graduates of NACCAS programs ($4,101.11), yet their repayment rate is comparable. By these metrics, COMTA-accredited schools and programs appear to offer the best value for cost. These results also suggest that programmatic accreditation offers good value for cost, compared to institutional accreditation.
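No formal value-for-cost metric is defined in this report; the comparisons above simply weigh the quoted tuition, loan-burden, and repayment figures against one another. The following is a minimal sketch of that tabulation in Python, using only the figures quoted above; the idea of ranking by repayment rate is an illustrative choice, not part of the original analysis.

# Figures quoted above: program type -> (avg tuition $, median loan burden $, repayment rate %).
programs = {
    "corporate":         (16561.77, 9998.85, 41.3),
    "other for-profit":  (13505.24, 8228.05, 46.7),
    "community college": (5647.05,  2004.06, None),   # repayment rate not reported
    "CAM university":    (10768.40, 9871.75, 83.45),
}

# Rank program types by repayment rate where available (higher suggests better value).
ranked = sorted(
    (item for item in programs.items() if item[1][2] is not None),
    key=lambda item: item[1][2],
    reverse=True,
)
for name, (tuition, loan, repay) in ranked:
    print(f"{name:17s}  tuition ${tuition:>9,.2f}  loan ${loan:>8,.2f}  repayment {repay:.1f}%")

On these numbers, CAM university-based programs lead on repayment despite a loan burden comparable to corporate programs, consistent with the ranking described above.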
Accreditation in general appears to make some difference in the numbers of practitioner disciplinary actions. Despite some probable measurement error, among the regulated states there are substantially more actions-the average number of disciplinary actions is almost twice as high-against providers in states that do not require graduation from an accredited school compared to those that do. Even taking error and confounding into account, school accreditation still appears to be moderately correlated with fewer practitioner sanctions. More research is needed to confirm these preliminary findings.

Both massage and CIHC educators can be considered highly informed consumers of massage therapy. It is interesting to see that both groups have different expectations regarding the competencies considered necessary for massage therapists depending on the role of the therapist and the practice setting. There is substantial agreement between massage educators and other complementary and integrative health care educators regarding the competencies each group considers necessary to see a massage therapist as a colleague or peer, and separately in the role of one's own personal therapist. Advanced level competencies in specialty areas of practice are considered less important than general competencies overall, by both groups. But, while CIHC educators selected these much less frequently, from their comments it is clear that they assume and expect that someone working with a particular population, such as oncology patients, orthopedic patients, or geriatric and pediatric patients in a clinical setting, should have specific training and/or credentialing in these areas, just as they assume all massage therapists are credentialed to practice in their state, an assumption which surfaced during many of the initial interviews. It is not surprising that each group is willing to accept a lesser degree of competency in some areas that may not be applicable to them on an individual level, as long as the therapist is generally proficient and can give a massage that is satisfying to the individual client.

One area of notable disagreement between the two educator groups is familiarity with electronic medical records or charting. Along with interprofessional collaboration and research literacy skills, programs that aim to prepare massage therapists to work in clinical health care settings would do well to include these in the curriculum. Interprofessional practice and education is a growing trend in US health care, particularly in the management of common chronic conditions, and massage educators should take heed.

Massage educators appear to have a more negative view of the inconsistency of massage education compared to CIHC educators, as higher percentages of massage educators agreed that the quality is both poor and inconsistent, and that it needs to improve to be seen as comparable to other allied health providers. However, a higher percentage of CIHC educators had no opinion about the quality of massage education, which could account for this difference. Given that educational quality is perceived as so variable, it is not surprising that personal experience or direct knowledge of a practitioner is the single most important factor in choosing a therapist for most respondents, whether they are massage or CIHC educators. Despite their more negative perception of the quality of massage education, massage educators do not agree with CIHC educators about what is needed to improve quality. More CIHC educators agreed that longer program time, more academically based programs, more interprofessional education, and a requirement of supervised internships or practicum placements would improve quality, compared to massage educators. The comments of massage educators showed a good deal of support for these means of improving educational quality, as long as the wellness and 'mind-body-spirit' orientation of massage therapy is maintained, together with an emphasis on proficiency in practical application-being able to give a 'good' massage. Both groups suggested that raising admission requirements to massage programs is a necessary step in improving quality. Massage educators' comments also appear to support competency-based educational standards, rather than an hours-based standard.

There is a current trend across all sectors of postsecondary education to view education as a commodity (19), and much has been written about the corporatization of higher education in recent years. Massage therapy education is no exception, as evidenced by the increased number of corporate-owned chains of massage schools and programs within career and technical school chains over the past 15 years, even though their rapid growth has slowed somewhat since the Great Recession of 2008. Some would argue that massage therapy itself has become a commodity, based on the rise of franchises that offer reduced rates for consumers, and the development of discounted provider networks such as American Specialty Health, that offer reduced reimbursement to providers in exchange for referrals. From this perspective, massage therapy and massage therapy education are arguably victims of their own success.

Clearly, based on the data presented here, the quality of massage education in the United States is inconsistent and inadequate, whether it is assessed quantitatively or qualitatively. This inconsistent quality undermines the integrity and perceived value of massage therapy education and, consequently, the integrity and value of massage therapy as a profession. Integrity is jeopardized when any educational provider or massage practitioner performs or is perceived to perform poorly, raising concerns about the quality of training offered by all educational providers. If the educational process that produces massage practitioners is unreliable, then the reputation of all practitioners is damaged by those who complete an educational program, pass a qualifying examination and become credentialed to practice, and yet cannot perform a massage to the satisfaction of the consumer.

The current changes that are rapidly happening in the larger health care landscape hold tremendous opportunities for massage therapy as a discipline. At the same time, unless educational and regulatory standards can evolve to keep pace, massage therapists who wish to practice as integrative health care providers are at high risk of being shut out of these opportunities.
Conclusions & Recommendations

Returning to the original questions that framed this evaluation, we can conclude that accreditation does improve the quality of massage education and, at the same time, that there is much room for improvement. Knowledgeable and experienced educators both inside and outside the massage profession are in agreement on this point. COMTA accreditation, in particular, does appear to offer better value for cost, compared to other accreditation organizations that do not require curriculum competencies specific to massage therapy. Adding competencies at an advanced level would be helpful to some extent in advancing the perception and status of massage therapy in the eyes of other conventional and integrative health care professions. However, raising admission requirements to massage programs, moving to longer and more academically based programs, including degree programs, and requiring supervised clinical internships or practicum placements would be more effective in raising the perceived quality of massage education. Including more interprofessional education, such as the skills needed for interprofessional practice and for using electronic medical records and charting, along with research literacy skills, is necessary from the viewpoint of CIHC educators, but is not equally valued by massage educators.

The data suggest several recommendations for improving the quality of massage education. One is that data on ethical and legal violations of massage therapy standards of practice should be compiled according to consistent criteria and maintained in a single registry that includes information on the practitioner's training institution, to facilitate accurate recordkeeping and future research. Ideally, this registry would be maintained by an umbrella organization, such as the Federation of State Massage Therapy Boards. If accreditation does indeed reduce ethical violations by practitioners, then any credentialing examination should require graduation from an accredited school or program to sit for the examination.

Another recommendation is that massage programs consider raising admissions requirements to include two years of college or other vocational education, a recommendation made by many respondents in both the massage and CIHC educator groups surveyed. Only 10% of current AMTA-affiliated massage therapists list a high school diploma as their highest level of education, according to the most recent AMTA survey (12). Sixty-five percent report 'some college' or higher, although it is certainly possible that some of those respondents are counting their massage training as 'some college'. Currently, 30% report completing a bachelor's degree. It would be interesting to see to what extent academic education is correlated with income from massage and/or career longevity, and what other characteristics leading to career success could be identified through additional education research. Such research might also specify useful criteria for admission to massage programs. Proprietary schools might consider developing articulation agreements with a community college or even a four-year, bachelor-level program. This strategy could allow smaller proprietary schools to partner, rather than compete, with community college programs, while still maintaining high standards of hands-on training and a whole-person philosophy of practice. Through such agreements, community colleges could provide access to remedial education for massage therapy students who lack sufficient reading, writing, and math skills. Community-based partnerships to develop supervised clinical internships or practicum placements should also be explored, as well as ways to create career paths for full-time massage therapy educators who have training in adult education. Massage educators might consider joining forces to create cooperative nonprofit schools where they could be salaried employees with benefits, as opposed to part-time contingent faculty. Teaching is a separate skill set from clinical practice, and being proficient as a practitioner does not automatically make someone a competent educator, even to teach clinical, hands-on skills.

The US Department of Education recently proposed revisions to how gainful employment data will be calculated and used to qualify institutions for offering Federal financial aid. How this will affect massage schools and programs, especially proprietary schools, remains to be seen. One step that would be helpful is for massage programs to reach consensus on how to measure graduation and job placement rates, and to do this consistently so that more accurate comparisons can be made. Ideally, this information would be compiled and maintained by a massage-related organization with no actual or perceived conflict of interest in any individual or group of programs.

Cost data were taken from the United States Department of Education's published Gainful Employment 2011 Informational Rates data tables; COMTA staff contributed additional existing data collected from public sources. The author alone designed the study and survey instruments, collected the survey data, analyzed and interpreted all study data, and prepared the final report upon which this manuscript is based. COMTA gave permission for this report to be published in its entirety.

Copyright

Published under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.
Fruit Quality Attributes of Sour Cherry Cultivars

The aim of this study is the evaluation of two sour cherry cultivars ("Oblačinska", "Cigančica") grown at Cacak (Western Serbia) by determining main physical properties such as fruit linear dimensions, arithmetic and geometric mean diameter, fruit volume, sphericity, surface area, and aspect ratio at the commercial stage. Also, some attributes related to fruit quality (fruit weight, soluble solids, total acidity, ripening index, and juice pH) were evaluated at different ripening stages. "Oblačinska" showed better physical properties, except aspect ratio, when compared with "Cigančica". Sphericity was similar in both cultivars. Fruit weight, soluble solids content, ripening index, and pH increased during the ripening process, whereas titratable acidity decreased. In general, "Oblačinska" had a better chemical composition than "Cigančica". Finally, the fruit of both cultivars is suitable for processing and also for fresh consumption.

Introduction

Sour cherry (Prunus cerasus L.) is an allotetraploid species, originating from a natural hybridization between sweet cherry (P. avium L.) and ground cherry (P. fruticosa Pall.) [1]. The distribution of European sour cherry ranges from the Mediterranean islands to northern Russia, and within this range there is a wide diversity of plant habitats and fruit characters [2]. Sour cherries are a popular fruit crop and are widely used in the fruit industry. In 2009, Serbia (with 38,000 hectares under cultivation and an annual production of 105,353 t) was the sixth largest producer of this species in the world [3]. Serbia can also be regarded as an important exporter of frozen and fresh sour cherry fruit [4]. According to the above author, the most common cultivars in Serbian sour cherry orchards are "Oblačinska" and "Cigančica" (also called "Cigány maggy" or "Cigány"), both belonging to P. cerasus L. "Oblačinska" is an autochthonous and heterogeneous Serbian cultivar, whereas "Cigančica" is native to Hungary and grown worldwide [4-6]. Both cultivars display great phenotypic variability. The fruit of these cultivars, especially of "Oblačinska", is of the "morello" type, small to medium in size, with dark red and thin skin. The flesh is red, medium firm, juicy, quite sour, aromatic, and of high quality. In addition, "Oblačinska" is highly resistant to leaf spot and to bitter rot [7]. This cultivar is also self-compatible and characterized by regular and high yield. From this point of view, the numerous positive traits of both cultivars should make them interesting for planting in other countries.

The fruit of sour cherry is a drupe, round or heart-shaped, glabrous, and with the pedicel attached [6]. After pollination and fruit set, the fruit develops through growth and maturation. Growth is a quantitative process that leads to an increase in fruit weight and volume [8]. Fruit maturation is characterized by changes in the physiological, biochemical, and morphological traits of the fruit, which determine the qualitative characteristics of any cultivar and, finally, its depreciation during senescence. Changes in cherries upon ripening are easily apparent from the color change from green to red [9]. Fruit ripening is associated with important chemical changes, and color change is mainly influenced by the concentration and distribution of different anthocyanins in the skin [10]. Accordingly, total anthocyanins, and thus the color of red fruit, increase during ripening [11].
Color of red fruit is an indicator of maturity and an important parameter for setting the commercial harvest date [12]. Additionally, the skin of sour cherry fruit is generally bright to dark red in color and exhibits less color variation than that of sweet cherries [6]. Numerous studies have examined physical and chemical changes in sweet cherries during maturation [12-14], while studies on sour cherries are rare. Also, no detailed study concerning the physical and chemical properties of sour cherry has been performed up to now. The aim of this study was to determine some physical and chemical properties of two sour cherry cultivars during different stages of ripening.

Material and Methods

Weather conditions at Cacak are characterized by an average annual temperature of 11.3 °C and total annual rainfall of 690.2 mm. Soil at the orchard is classified as vertisol according to the Serbian Soil Taxonomy. The soil texture is clay-loam, moderate in organic matter and total nitrogen (NTOT) (1.97% and 0.17%, respectively); soil pH in 0.01 M KCl was 6.67, with no soluble salt problem in the 0-30 cm soil depth. The contents of P2O5 and K2O in this soil depth were high, that is, 330 mg kg−1 and 350 mg kg−1, respectively. Annually, standard cultural practices (pruning, thinning, fertilization, pest control, and treatments) were performed, except irrigation. Fertilizers were applied according to soil analyses. This ensured optimum conditions for the trees to grow and bear fruit. Twenty representative trees within each replicate were selected for sampling and data collection. The four replicates were arranged in a randomized complete block design.

Fruit Physical and Chemical Measurements. For each cultivar, 25 homogeneous fruits in four replications were harvested at four ripening stages (S1 to S4), according to fruit color and size. Sour cherry fruits at S1 were in the last growth phase, having a light red color, while at S4 they had reached the commercial ripening stage to be harvested. In the S4 ripening stage, fruits have a dark red skin color. Thus, S1-S4 corresponded to the S11-S14 stages described for cherry fruit growth and ripening on the tree by Serrano et al. [15]. The fruit weight (FW) for each fruit during maturation was determined using a digital balance (Tehnica ET-1111, Iskra, Slovenia) to an accuracy of 0.01 g, and results are given as the mean ± SE (g). For each sour cherry fruit at the last stage (S4), three linear dimensions, length (L), width (W), and thickness (T), were measured using a digital caliper gauge with a sensitivity of 0.001 cm. The measurement of L was made on the polar axis of the fruit, that is, between the apex and the stem. The arithmetic mean diameter (Da), geometric mean diameter (Dg), fruit volume (FV), sphericity (φ), and surface area (S) were calculated following Mohsenin [16]. The aspect ratio (Ra) was calculated following Maduako and Faborode [17].

Fruit chemical composition was measured from S1 to S4 in triplicate. Soluble solids content (SSC) was determined with a Milwaukee MR 200 hand digital refractometer (ATC, Rocky Mount, USA) at 20 °C (°Brix). Titratable acidity (TA), as malic acid (%), was determined by titration to pH 8.1 with N/10 NaOH. On the basis of the measured data, the soluble solids/titratable acidity ratio (SSC/TA ratio, or ripening index, RI) was calculated. The pH was determined with a Cyber Scan 510 pH meter (Nijkerk, Netherlands). All results are expressed as the mean ± SE for each chemical property.
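The derived size and shape descriptors above are cited to Mohsenin [16] and Maduako and Faborode [17] without the formulas being reproduced in this text. The Python sketch below uses the conventional definitions from that literature; the equivalent-sphere form of the volume and the width-to-length form of the aspect ratio are assumptions about the exact expressions used, not equations taken from this paper.

from math import pi

def derived_fruit_properties(L, W, T):
    """Derived descriptors from the three linear fruit dimensions (cm):
    length L, width W, thickness T. Conventional forms attributed to
    Mohsenin (Da, Dg, sphericity, surface area, volume) and to
    Maduako and Faborode (aspect ratio)."""
    Da = (L + W + T) / 3          # arithmetic mean diameter
    Dg = (L * W * T) ** (1 / 3)   # geometric mean diameter
    return {
        "Da": Da,
        "Dg": Dg,
        "sphericity": Dg / L,            # dimensionless, 1 for a perfect sphere
        "surface_area": pi * Dg ** 2,    # surface of the equivalent sphere
        "volume": (pi / 6) * Dg ** 3,    # volume of the equivalent sphere
        "aspect_ratio": W / L,           # often reported as a percentage
    }

def ripening_index(ssc_brix, ta_percent):
    """SSC/TA ratio used as the ripening index."""
    return ssc_brix / ta_percent

# Hypothetical dimensions (cm) for one fruit, for illustration only.
print(derived_fruit_properties(L=1.45, W=1.52, T=1.38))
print(ripening_index(16.80, 0.76))  # SSC and TA reported at S4 for "Oblačinska"; gives ~22.1

The last call reproduces the RI of about 22.10 reported below for "Oblačinska" at the commercial stage, which supports the assumed SSC/TA form of the index.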
Results and Discussion

Evaluation of Physical Properties. Figure 1 shows that the FW of "Oblačinska" continuously increased during the ripening period from the S1 to the S4 stage. The FW in the S3 and S4 stages was significantly higher than in S1 and S2. The FW of "Cigančica" also increased from S1 to S3, and then decreased from S3 to S4 (Figure 1). In this case, FW in S2 and S3 was higher than in S1. Differences between cultivars within the same stage were significant in S1 and in S4; that is, "Oblačinska" showed larger fruit than "Cigančica" in the S1 and S4 stages (Figure 1). Other differences were insignificant. At the commercial ripening stage (S4), the values were 3.48 ± 0.11 g for "Oblačinska" and 2.66 ± 0.09 g for "Cigančica". This seems to represent a major advantage for growers.

Figure 1: Fruit weight evolution from the S1 to the S4 ripening stage in two sour cherry cultivars. Data are mean ± SE. Different capital letters represent differences between stages for the same cultivar, and lower-case letters represent differences between cultivars within the same stage at the 0.05 probability level (LSD test).

A similar tendency for fruit weight changes during ripening was observed in a previous study on cherry cultivars [14]. Also, earlier work reported high variability among cherry cultivars regarding this parameter [4,11,12,14,18]. According to Nikolić et al. [19], fruit weight in different clones of "Oblačinska" under Belgrade (Serbia) climatic conditions varied between 2.97 g and 5.01 g, whereas Khorshidi and Davarynejad [20] reported that the fruit weight of "Cigančica" under Iranian conditions was 2.81 g. The obtained results concur with the findings of the above authors. Thus, our results seem to indicate that fruit weight has a strong genetic influence, in accordance with Papp et al. [18]. This property may also be useful in the separation and transportation of cherry fruit by hydrodynamic means [21].

Regarding the fruit dimensions (L, W, T), Da, Dg, and FV at the last (final) stage (S4), "Oblačinska" had significantly higher values than "Cigančica", whereas φ values were similar in both cultivars (Table 1). In contrast, Ra was higher in "Cigančica" than in "Oblačinska". Ansari and Davarynejad [6] reported that some fruit physical properties of "Cigančica", such as fruit shape index, fruit diameter, and fruit length, were affected by pollination treatment and varied from 0.85 to 1.00, 18.50 mm to 20.29 mm, and 16.23 mm to 20.21 mm, respectively. Our range values were lower than those obtained by the above authors. The reasons for these differences in fruit dimensions could be differences in the growing areas, as previously reported [1,5,8]. The importance of these axial dimensions in determining the aperture size of machines, particularly in the separation of materials, has been discussed by Mohsenin [16]. These dimensions may be useful in estimating the size of machine components, the number of fruits to be engaged at a time, the spacing of slicing discs, and the number of slices expected from an average fruit [22]. Sour cherry fruits have several unique characteristics that set them apart from engineering materials. These properties determine the quality of the fruit, and identification of correlations between changes in these properties makes quality control easier [21]. According to the above authors, knowledge of shape and physical dimensions is important in screening solids to separate foreign materials and in sorting and sizing of cherry fruit.

Evaluation of Chemical Properties.
The SSC significantly increased from S1 to S4 in both sour cherry cultivars, with values at S4 ranging from 16.25 ± 0.39 °Brix in "Cigančica" to 16.80 ± 0.32 °Brix in "Oblačinska" (Figure 2(a)). In the S1, S2, and S3 ripening stages, "Cigančica" had a higher SSC than "Oblačinska", whereas in the last stage (S4) "Oblačinska" had the higher content. In an earlier study by Nikolić et al. [19], soluble solids in different clones of "Oblačinska" varied from 14.07% to 19.06%, while in a previous work by Ansari and Davarynejad [6] for "Cigančica", soluble solids ranged between 14.43% and 16.24%, and/or 18.1 °Brix according to Khorshidi and Davarynejad [20]. A similar tendency for the accumulation of SSC in sweet cherry fruit during ripening has been reported [14,15]. Generally, our range values for final SSC and its tendency over the ripening process are in good agreement with the results obtained by the above authors.

Figure 2: Soluble solids content (a), titratable acidity (b), ripening index (c), and pH (d) evolution from the S1 to the S4 ripening stage in two sour cherry cultivars. Data are mean ± SE. Different capital letters represent differences between stages for the same cultivar, and lower-case letters represent differences between cultivars within the same stage at the 0.05 probability level (LSD test).

Significant differences were also found in TA between the sour cherry cultivars (Figure 2(b)). The highest value at the commercial ripening stage (S4) was found in fruit of "Cigančica" (0.81 ± 0.02%), and the lowest in "Oblačinska" (0.76 ± 0.01%). Additionally, TA significantly decreased from the S1 to the S4 ripening stage in both cultivars, and differences between cultivars within the same stage were not significant, except at S4, as previously described [12,14]. According to Nikolić et al. [19], titratable acidity in "Oblačinska" ranged from 1.12% to 1.54%. In fruit of "Cigančica" grown under Iranian conditions, TA varied from 1.57% to 1.89% [6], or was about 2.38 g 100 mL−1 [20]. Our results are much lower than previously reported in the literature [6,19]. The reasons for these differences in TA could be differences in the growing areas [9,11]. Great genotypic variability for TA in sour cherry fruit was also observed in other studies [5,18].

The data presented in Figure 2(c) show that the soluble solids/titratable acidity ratio, or ripening index (RI), significantly increased during the ripening period in both sour cherry cultivars, by ∼54% in "Oblačinska" and by ∼59% in "Cigančica", which is in agreement with previous studies on cherry [12,14,15]. The lower RI in fruit harvested at the commercial ripening stage (S4) was recorded in "Oblačinska" (22.10 ± 0.33), and the higher in "Cigančica" (22.89 ± 0.41). Our values are much higher than those in the literature [6,19,20]. Thus, Ansari and Davarynejad [6] reported that the "Cigančica" cultivar was not good from the viewpoint of the SSC/TA ratio when compared with other Hungarian sour cherry cultivars. However, many authors have reported that cultivar, climatic conditions, environmental factors, harvest, and maturity stage can affect the chemical properties of cherries [8,9,11]. On the basis of all the above results, we could say that these sour cherry cultivars grown under Cacak pedoclimatic conditions are suitable for processing and also for fresh consumption [5,23].
Fruit pH significantly increased along the ripening process on the tree (from S1 to S4) in both sour cherry cultivars, with the highest value reached in "Oblačinska" (3.28 ± 0.06) and the lowest in "Cigančica" (3.10 ± 0.03) (Figure 2(d)). Differences between cultivars within the same stage were insignificant in the early stages (S1 and S2), whereas differences in the S3 and S4 ripening stages were significant. According to Khorshidi and Davarynejad [20], the pH in fruit of "Cigančica" was 3.09, which supports our results. Previous work in sour cherry also reported high variability among cultivars regarding pH [18].

Conclusion

As final conclusions, knowledge of the fruit physical and chemical quality attributes of the sour cherry cultivars studied here could be useful for choosing the appropriate ones to be grown under Western Serbian climatic conditions or used as parents in future breeding programs. In this sense, "Oblačinska" had higher fruit weight, fruit dimensions, arithmetic mean diameter, geometric mean diameter, fruit volume, and surface area, and the best chemical attributes, while "Cigančica" was the better cultivar in terms of a higher aspect ratio. Sphericity was similar in both cultivars.
Using exercises to improve public health preparedness in Asia, the Middle East and Africa

Background: Exercises are increasingly common tools used by the health sector and other sectors to evaluate their preparedness to respond to public health threats. Exercises provide an opportunity for multiple sectors to practice, test and evaluate their response to all types of public health emergencies. The information from these exercises can be used to refine and improve preparedness plans. There is a growing body of literature about the use of exercises among local, state and federal public health agencies in the United States. There is much less information about the use of exercises among public health agencies in other countries and the use of exercises that involve multiple countries.

Results: We developed and conducted 12 exercises (four sub-national, five national, three sub-regional) from August 2006 through December 2008. These 12 exercises included 558 participants (average 47) and 137 observers (average 11) from 14 countries. Participants consistently rated the overall quality of the exercises as very good or excellent. They rated the exercises lowest on their ability to identify key gaps in performance. The vast majority of participants noted that they would use the information they gained at the exercise to improve their organization's preparedness to respond to an influenza pandemic. Participants felt the exercises were particularly good at raising awareness and understanding about public health threats, assisting in evaluating plans and identifying priorities for improvement, and building relationships that strengthen preparedness and response across sectors and across countries. Participants left the exercises with specific ideas about the most important actions that they should engage in after the exercise, such as improved planning coordination across sectors and countries and better training of health workers and response personnel.

Conclusions: These experiences suggest that exercises can be a valuable, low-burden tool to improve emergency preparedness and response in countries around the world. They also demonstrate that countries can work together to develop and conduct successful exercises designed to improve regional preparedness for public health threats. The development of standardized evaluation methods for exercises may be an additional tool to help focus the actions to be taken as a result of the exercise and to improve future exercises. Exercises show great promise as tools to improve public health preparedness across sectors and countries.

Background

Since 2001, there has been a dramatic increase in the use of disaster preparedness exercises among public health agencies in the United States [1,2]. These exercises have explored a wide range of topics, from foodborne toxoplasmosis outbreaks [3], chemical disasters [4], acute blood shortages [5], and bioterrorism [6] to severe acute respiratory syndrome (SARS) [7]. Exercises have been designed to assess and improve a variety of capabilities, such as regional disaster preparedness among rural hospitals [8], knowledge and confidence of legal authorities [9], resource allocation [10], and risk communications [11]. A large number of these exercises have focused on the spread of infectious diseases, especially the threat of pandemic influenza, because of the common challenges pandemic influenza shares with other types of public health emergencies [12-16].
Our knowledge of the use of exercises for public health-related disaster preparedness outside the United States is much more limited. A considerable number of these types of exercises in the United States have been published in the academic literature, but few findings from exercises that have taken place outside the United States have been published. Some researchers in the United States have tried to address this gap by publishing the findings of "virtual" internet-based, long-distance exercises conducted remotely with international partners [17,18]. Even less is known from direct experiences with in-country exercises or exercises that span multiple countries in a given region. The results of these types of exercises may be reported directly to exercise participants but often do not make it into the scientific literature. If the results from exercises are published in any systematic way, they often appear in in-house publications for domestic audiences rather than in scientific journals with a more global reach. The incentives, financial or otherwise, for researchers to turn these in-house publications into scientific papers are limited. This is a major loss to our knowledge base because countries around the world are increasingly recognizing the importance of transnational efforts to complement national efforts to detect and respond to public health threats quickly and effectively [19-23]. Exercises provide these countries with a vehicle to collaborate and test their ability to respond to these transnational threats. Exercises also provide these countries with an avenue to build relationships and trust among colleagues across sectors and across borders [24].

Methods

We developed pandemic influenza tabletop exercises that built on the "Day After" methodology developed by Millot, Molander and Wilson [25] and described elsewhere in greater detail [1]. Countries in three different geographic regions participated: Southeast Asia (Cambodia, China, Lao PDR, Myanmar, Thailand and Vietnam), the Middle East (Israel, Jordan and Palestine) and East Africa (Burundi, Kenya, Rwanda, Tanzania and Uganda). Countries that participated in the exercises were included because they were all part of sub-regional disease surveillance networks established in part through funding from the Rockefeller Foundation. Some countries not included in these networks were invited to observe the exercises. Countries varied in their past experience with preparedness exercises, with some countries having extensive past exercise experience (such as Israel, Vietnam, China and Thailand) and other countries having minimal past exercise experience (such as Cambodia, Lao PDR, Myanmar and Uganda).

Exercises were developed and conducted by exercise planning teams that included external exercise development experts from the RAND Corporation as well as senior health leaders from each of the respective localities and/or countries represented in the exercise. There were three different levels of exercises: sub-national (e.g., one or more provincial areas), national (e.g., one country) and sub-regional (e.g., multiple countries from one geographic region). All exercises were multi-sectoral in nature, meaning that they involved representatives from more than one sector of government. Examples of sectors included were health, agriculture, defense and environment.
Each exercise focused on three to six different broad topic areas, such as surveillance and information sharing, disease control, and communications, that were identified in previous exercises as important [2,12]. Because Thailand had considerable previous experience with exercises, it designed and conducted its sub-national exercise with limited involvement from representatives at RAND. Exercise discussions focused on one topic area at a time, each lasting from 30 to 90 minutes. Exercise participants were selected by the exercise planning team and differed from exercise to exercise, but all exercises included representatives from the health sector of the locality and/or country represented. They also included senior leaders from at least one other non-health sector. In addition to participants, exercises also had "observers" who were invited to watch the exercise but did not directly engage in exercise discussions. All exercises were led by one or two exercise "facilitators" who directed the exercise discussion and probed participants for more information. In general, exercise facilitators can represent a range of disciplines from media professionals to health professionals. In these exercises the facilitators were all health officials or health researchers who were trained in the facilitation of exercises and who had extensive experience facilitating past exercises. Exercises presented participants with a future scenario that involved an unfolding pandemic influenza crisis at different stages. Participants were required to respond to the scenario with the actions they would take if the scenario were actually occurring. Exercise facilitators were given discussion points and probes to keep the discussion focused and moving forward. Each section of the exercise ended with participants being asked to make concrete decisions for the topic area being discussed before moving on. The exercise concluded with a debriefing in which all participants evaluated their own response in light of what they learned during the exercise.

All exercise participants were asked to complete an evaluation form immediately after the exercise, before leaving the room. Typically, they spent about 15 minutes responding to these questions. Six of the exercises were rated for their quality through five Likert scale questions: the overall quality of the exercise (1 = poor; 5 = excellent); the quality of the exercise discussions (1 = poor; 5 = excellent); the exercise identified important key gaps in preparedness (1 = strongly disagree; 5 = strongly agree); the exercise helped participants to better understand the roles and responsibilities of agencies and organizations responding to an influenza pandemic (1 = strongly disagree; 5 = strongly agree); and the exercise generated information that participants planned to use (1 = strongly disagree; 5 = strongly agree). The remaining six exercises asked participants three different qualitative questions: what was the importance of the exercise; what are the most important actions that should be taken based on the exercise; and what suggestions do you have to help improve future exercises. In addition to participant evaluations, detailed After Action Reports (AARs) were developed for each exercise that summarized the exercise discussions and highlighted key aspects of each exercise.
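As an illustration of how such Likert ratings can be summarized into the percentages reported below, here is a minimal Python sketch of our own; the response values are invented, not taken from the study's evaluation forms:

```python
# Hypothetical 1-5 Likert responses for one exercise, keyed by question.
responses = {
    "overall_quality": [5, 4, 4, 3, 5, 4],
    "identified_key_gaps": [3, 2, 4, 3, 3, 4],
}

for question, scores in responses.items():
    # Share of respondents choosing 4 or 5 ("good/excellent" or "agree/strongly agree").
    share = sum(s >= 4 for s in scores) / len(scores)
    print(f"{question}: {share:.0%} rated 4 or 5")
```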
In January 2013, health leaders who were involved in the planning of the exercises from a subset of countries participated in brief semi-structured face-to-face interviews to discuss how their country followed up on the exercises and the current state of their exercise program. Health leaders included health officials working for the ministry of health in their respective countries. These health leaders were all directors of departments (such as communicable disease) within their ministry.

Results and discussion
We developed and conducted 12 exercises from August 2006 through December 2008: four sub-national exercises, five national exercises, and three sub-regional exercises (Table 1). Across all of these exercises there were a total of 558 participants and 137 observers from 14 countries. The average number of participants per exercise was 47 and the average number of observers was 11. Participants from the health sector were represented in every exercise. The most commonly represented sectors other than health were agriculture and defense. Four exercises were shorter than one full 8-hour day in length, three exercises were one full day in length, and five exercises were more than one full day in length. The average length of the exercises was 9.75 hours. All exercises covered three to six of the topic areas outlined in Table 2.

Table 3 highlights the participant evaluations from the six exercises that used questionnaires with Likert scale questions. Participants who completed these evaluation forms consistently rated the overall quality of the exercises as high (88-100% rating the exercise as good or excellent), with one exception (Middle East sub-regional exercise: 59% rating the exercise as good or excellent). Participants also consistently rated the exercises highly for helping them to understand the roles and responsibilities of organizations and agencies responding to an influenza pandemic (91-94% rating the exercise as good or excellent in this area), with one exception (China sub-regional exercise: 76% rated the exercise as good or excellent in this area). Participants differed on what they felt about the quality of the information shared in the exercises (67%-93% rated the information as good or excellent). Participants rated the exercises lowest on their ability to identify key gaps in performance (50%-73%). Among the possible explanations, the limited time frame available to conduct tabletop exercises and the limited number of topics that can be discussed in that time frame may inhibit the ability of these exercises to identify a significant number of key gaps. Another possibility is that cultural sensitivities in some of these countries may have limited participants' comfort in identifying gaps in their government's preparedness system. The exercises were most successful at helping participants gain knowledge that they planned to use to improve the preparedness of their organization (82%-100% agreeing or strongly agreeing that they would use what they learned from the exercise). Table 4 summarizes the qualitative feedback provided by participants in their evaluation forms.
Three general themes emerged from the participant comments on the most useful aspects of the exercises: the ability of exercises to raise awareness and understanding about public health threats, the ability of the exercises to assist in evaluating plans and identifying priorities for improvement, and the ability of the exercises to build relationships and enhance preparedness and response capabilities across sectors and across countries in a geographic region. Participants also left the exercises with specific ideas about the most important follow-up actions that they should take in the near future. Specifically, participants identified better planning, improved planning coordination across sectors and countries, and better training of health workers and response personnel. Finally, participants provided feedback on the use of tabletop exercises for pandemic influenza preparedness. No participants stated that they felt the exercises involved too many sectors. In fact, many participants reported that they felt more sectors should be involved and that exercises should also involve more private sector partners and more partners from NGOs. Participants also felt that more could be done in the exercises to ground theoretical responses in more practical responses.

Some health leaders who were part of exercise planning teams participated in semi-structured face-to-face interviews in January 2013. Countries that reported having pre-existing exercise programs prior to participating in the exercises described here were much more likely to report conducting exercises at regular intervals over time compared to countries that did not report a pre-existing exercise program. Most countries reported modifying and using some or all of the exercise template materials that were developed for the exercises described here. However, one country that had no prior exercise experience organized and carried out numerous sub-national exercises on its own after participating in the national and sub-regional exercise. Health leaders in this country reported that participating in an exercise helped to motivate them to develop an exercise program and regularly assess different aspects of their public health preparedness. The largest barriers to continued exercising that were reported included lack of financial resources and limited support among leadership to develop and sustain an exercise program.

Conclusions
These experiences suggest that exercises can be a valuable, low-burden tool to improve emergency preparedness and response in countries around the world. They also demonstrate that countries can work together to develop and conduct successful exercises designed to improve regional preparedness for public health threats. Regular participation in exercises is associated with improved overall response to public health threats [26]. Countries that participated in sub-regional exercises also had the opportunity to build relationships and trust with colleagues across sectors and across borders [24,27]. But exercises are not perfect. Research has called into question the ability of exercises to adequately expose operational and logistical gaps [28]. This is consistent with our finding that exercise participants rated the exercises lowest for identifying key gaps. In addition, there is a lack of consensus on what makes exercises effective tools to assess public health preparedness and how the outputs of exercises such as AARs can be used to support and improve public health preparedness efforts [29].
Thus, the development of standardized evaluation methods for exercises may be an additional tool to help focus the actions to be taken as a result of the exercise and to improve future exercises. Despite these flaws, exercises show great promise as tools to build relationships, assess performance and improve collaborative planning for public health threats across multiple sectors and multiple countries over time.
Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

Indonesian students' competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This weakness might be caused by the various types of errors students make. Hence, this study aimed at identifying students' errors in solving mathematical problems in TIMSS on the topic of numbers, which is considered a fundamental concept in Mathematics. This study applied descriptive qualitative analysis. The subjects were three students with the most errors on the test indicators, taken from 34 students of 8th grade. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving Applying-level problems, the type of error that students made was operational errors. In addition, for Reasoning-level problems, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, the analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

Introduction
There is an extensive literature on error analysis in mathematics problem-solving. The main concern is to improve students' learning and understanding of Mathematics. Several studies on error analysis in mathematics problem-solving have been done, for example on the topic of fractions. Abdullah et al. [1] reported several errors in solving HOTS problems, such as comprehension errors, transformation errors, process skill errors, and encoding errors. Problem-solving is an activity that involves various actions in the mind, including accessing and using knowledge and experience [2]. According to Polya [3], teaching strategies that involve the use of non-routine problems in the classroom give students the opportunity to develop higher-order thinking skills in the process of understanding, exploration, and application of mathematical concepts. Since problem-solving involves processing activities, the knowledge used in the process of solving problems differs. As stated by Mayer [4], this knowledge covers knowledge of language and facts, knowledge of schemes, knowledge of algorithms, and strategic knowledge. In short, students need to equip themselves with varied knowledge and strong skills in problem-solving. According to Widiharto [5], mathematical concepts and skills that are not fully mastered by students lead to difficulties and errors in solving mathematical problems. A study conducted by Susanti et al. [6] found that students have difficulty solving problems that involve the use of HOTS; among the difficulties they face are reading and interpreting data, determining and delegating data, and making conclusions and arguments. Herholdt and Sapire [7] stated that error analysis, also referred to as error pattern analysis, is the study of errors in learners' work with a view to finding explanations for these reasoning errors. Not all errors can be attributed to reasoning faults; some are simply careless errors [8], identified as "slips" by Olivier [9], which can easily be corrected if the faulty process is pointed out to the learner. Slips are random errors in declarative or procedural knowledge, which do not indicate systematic misconceptions or conceptual problems [10].
Error analysis is concerned with the pervasive errors (or 'bugs') which learners make, based on their lack of conceptual or procedural understanding (Ketterlin-Geller & Yovanoff 2009). As a result of such errors, the performance of Indonesian students in the international assessments that test students' thinking skills, namely the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA), has not been at a satisfactory level. Based on the data of TIMSS 2011, Indonesian students' mathematics achievement ranked 38th out of the 42 countries that participated in TIMSS, with a score of 386 against the benchmark score of 500 [11]. Generally, one of the causes was that students lacked the knowledge needed to solve TIMSS problems [12]. There are two domains tested in TIMSS assessments, namely the content and cognitive domains. The cognitive domain covers applying, analyzing, and reasoning, which are components of higher-order thinking in Bloom's Taxonomy. Meanwhile, the content domain covers four areas of mathematics learning, namely Numbers, Geometry, Algebra, and Data and Probability. Number is one of the fundamental subjects tested in TIMSS; its weight of 30% is the greatest percentage in the TIMSS content domain (Mullis et al., 2013). The concept of number is also the basis and prerequisite for the understanding of subsequent concepts [13]. Therefore, it is important to know the errors students make on number problems in TIMSS. This study was carried out to identify students' errors in solving mathematics problems in TIMSS on the topic of numbers, which is considered a fundamental concept in Mathematics.

Methodology
This study is a descriptive study that used a qualitative approach. The subjects were three students with the most errors on the test indicators, taken from 34 students of 8th grade in a secondary school in the district of Sidoarjo, East Java. Data were obtained through a paper-and-pencil test and student interviews. The instrument used for this study was a set of test questions to identify the types of students' errors. The items contained in the instrument were adapted from questions of the international assessment Trends in International Mathematics and Science Study (TIMSS). Four items were built in a subjective form, and the items contained elements of Applying- and Reasoning-level problems, as shown in Table 1. The subjects of the study were required to answer the questions contained in the instruments that had been prepared, under the supervision of the mathematics teacher who taught the respective class. The time allocated to answer the questions was 60 minutes. The errors were analyzed using three types of errors: conceptual errors, operational errors, and principal errors.

Table 1. Test items and their cognitive levels.
1. (Applying) Penny had a bag of marbles. She gave one-third of them to Rebecca, and then one-fourth of the remaining marbles to John. Penny then had 24 marbles left in the bag. How many marbles were in the bag to start with?
2. (Reasoning) P and Q represent two fractions on the number line above. P × Q = N. Show the location of N on the number line.
3. (Reasoning) Place the four digits 3, 5, 7, and 9 into the boxes below in the positions that would give the greatest result when the two numbers are multiplied.
4. (Applying) John and Cathy were told to divide a number by 100. By mistake, John multiplied the number by 100 and obtained an answer of 450. Cathy correctly divided the number by 100. What was her answer?
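A short Python check (our own illustration, not part of the study's instrument) confirms the intended answers to the three computational items:

```python
from itertools import permutations

# Item 1: after giving away 1/3, then 1/4 of the remainder, 24 marbles remain,
# so x * (2/3) * (3/4) = 24.
x = 24 / ((2 / 3) * (3 / 4))
print(x)  # 48.0

# Item 3: place 3, 5, 7, 9 into two 2-digit numbers to maximize the product.
best = max(permutations([3, 5, 7, 9]),
           key=lambda d: (10 * d[0] + d[1]) * (10 * d[2] + d[3]))
a, b = 10 * best[0] + best[1], 10 * best[2] + best[3]
print(a, b, a * b)  # 75 93 6975 (up to the order of the two factors)

# Item 4: John multiplied by 100 and got 450, so the number is 450/100 = 4.5;
# Cathy correctly divides that number by 100.
print((450 / 100) / 100)  # 0.045
```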
Analysis of the types of errors
This section discusses the errors made by the students. There are three types of students' errors: conceptual errors, operational errors, and principal errors.

Conceptual error. A conceptual error occurs when the students are not able to apply the concept of number. This error was recorded on Reasoning-level problems only. One student was not able to multiply two fractions and to determine the values of P and Q. He believed that the way to multiply two fractions is to "multiply across", that is, to multiply the numerator of the first fraction by the denominator of the second fraction and the numerator of the second fraction by the denominator of the first fraction; as a result, he failed to solve the problem. He thus made a conceptual error in multiplying two fractions. From the interview, the student was not able to understand how to multiply two fractions or to determine the corresponding values on the number line.

Figure 2. Example of conceptual error.

Another student was not able to identify the procedure and solve the problem. She considered that item 2 could be solved by algebraic operations such as algebraic multiplication; she thought that any problem containing letters must be solved algebraically. The interview showed that this error was caused by the subject not comprehending the mathematical problem given.

Operational error. An operational error occurs when the students are not able to calculate with numbers. This error was recorded on both Applying- and Reasoning-level problems.

For the Applying-level problem. Question: John and Cathy were told to divide a number by 100. By mistake, John multiplied the number by 100 and obtained an answer of 450. Cathy correctly divided the number by 100. What was her answer? The student was not able to compute the operation: the subject considered that 45 × 100 = 450, and so failed to solve the problem. He made an operational error in multiplying. The interview showed that this error was caused by the subject's carelessness.

For the Reasoning-level problems. Question: P and Q represent two fractions on the number line above. P × Q = N. Show the location of N on the number line. The student was not able to compute the operation: the subject miscalculated the product of the two fractions and so failed to solve the problem. He made an operational error in multiplying two fractions. The interview showed that this error was caused by the subject not being able to compute the product of two fractions.

Question: Place the four digits 3, 5, 7, and 9 into the boxes below in the positions that would give the greatest result when the two numbers are multiplied.

Figure 5. Example of operational error in a reasoning-level problem.

The student was not able to compute the operation. She made an operational error in multiplying whole numbers. The interview showed that this error was caused by the subject's carelessness.

Principal error. A principal error occurs when the students are not able to reach the final answer because of a previous error in the problem-solving. This error was recorded on Reasoning-level problems only. Question: Place the four digits 4, 6, 8, and 9 into the boxes below in the positions that would give the greatest result when the two numbers are multiplied. The student was not able to determine the final answer.
She made a principal error: the interview showed that the error occurred because the subject was not able to determine the suitable numbers that would produce the greatest value when multiplied.

Conclusion
In conclusion, it was found that students tend to make three types of errors: conceptual errors, operational errors, and principal errors. Some of the conceptual errors were translating a word problem into a mathematical problem, using fraction multiplication, and determining the order of fractions on a number line. The operational errors made by students occurred when the subjects could not carry out the operations correctly. The principal error made by students was failing to determine the final solution due to previous errors.
BIANCHI TYPE V UNIVERSE WITH TIME VARYING COSMOLOGICAL CONSTANT AND QUADRATIC EQUATION OF STATE IN f(R, T) THEORY OF GRAVITY

In recent years, modified theories of gravity have been extensively studied because of the discovery and confirmation of the current phase of accelerated expansion of the universe. The f(R, T) theory of gravity is one such theory, proposed by Harko et al. in 2011, in which R is the Ricci scalar and T is the trace of the stress-energy tensor. In this paper, we study a Bianchi type V universe in f(R, T) theory of gravity with a time-varying cosmological constant and a quadratic equation of state p = αρ² − ρ, where α ≠ 0 is a constant. We obtain exact solutions of the field equations for two cases: one with a volumetric expansion law and the other with an exponential expansion law. The physical features of the two models are discussed by examining the behavior of some important cosmological parameters, such as the Hubble parameter and the deceleration parameter. We find that the models have an initial singularity and that the physical parameters diverge at the initial epoch. Model 1, corresponding to the volumetric expansion law, does not resemble the ΛCDM model, while Model 2, corresponding to the exponential expansion law, resembles the ΛCDM model. The energy conditions of the models are also examined and found to be consistent with recent cosmological observations.

INTRODUCTION
Various astrophysical and cosmological observations, like type Ia supernovae [1][2][3], the Cosmic Microwave Background (CMB) [4,5], Large Scale Structure (LSS) [6,7] and other improved measurements of supernovae, confirm the discovery of the late-time cosmic acceleration, although it is yet to be ascertained what led to the start of this acceleration. According to the recent Planck collaboration results [8], it is found that about 95% of the total constituents of the universe are mysterious. Within the framework of General Relativity, the observed cosmic acceleration can be attributed to an exotic component of the universe with large negative pressure which contributes nearly 68% of the total energy content of the universe. This unknown energy fluid, supposed to be responsible for the late-time cosmic acceleration, is given the name dark energy. In the literature, several dark energy candidates like quintessence [9,10], k-essence [11], tachyon [12], phantom [13], Chaplygin gas [14], holographic dark energy [15] etc. have been proposed and studied in various cosmological backgrounds. It is seen that even though the hypothetical dark energy can smoothly explain the accelerated expansion of the universe, many dark energy models encounter problems when tested against some old red-shift objects [16,17]. Therefore, the other way considered to explain the cosmic acceleration is modification of Einstein's theory of gravitation. Some of the most studied modifications of Einstein's General theory of Relativity are the f(R) theory of gravity [18,19], f(T) gravity [20], the f(R, T) theory of gravity [21], among others [22]. In the f(R, T) theory of gravity, the gravitational Lagrangian in the Einstein-Hilbert action is modified by replacing the Ricci scalar R by an arbitrary function f(R, T) of R and the trace T of the stress-energy tensor. Harko et al.
[21] have derived the gravitational field equations of this theory in the metric formalism, as well as the equations of motion for test particles, which follow from the covariant divergence of the stress-energy tensor. They have also presented the field equations corresponding to the homogeneous and isotropic FRW metric and provided a number of specific cosmological models that correspond to some explicit forms of the function f(R, T), such as f(R, T) = R + 2f(T), f(R, T) = f₁(R) + f₂(T), and f(R, T) = f₁(R) + f₂(R)f₃(T). Since then many researchers have studied various isotropic and anisotropic cosmological models in different contexts within this framework of modified theory of gravity.

In the literature, various homogeneous and anisotropic cosmological models, such as the Bianchi type models, are studied in the context of dark energy as well as in alternative or modified theories of gravity. Homogeneous and anisotropic models of the universe are becoming more and more popular because of the anomalies found in observations like the Cosmic Microwave Background (CMB) and Large-Scale Structure [23,24]. Also, models that are spatially homogeneous and anisotropic are helpful in describing the evolution of the early stages of the universe. Bianchi type V models are significant because they include the space of constant negative curvature as a special case.

In this paper, we study a spatially homogeneous and anisotropic Bianchi type V universe with a time-dependent cosmological constant Λ and a quadratic equation of state p = αρ² − ρ [25], where α ≠ 0 is a constant, within the framework of f(R, T) theory of gravity. In Sect. 2, we provide the basic field equations of the f(R, T) theory of gravity for the functional form f(R, T) = R + 2f(T). In Sect. 3, we obtain explicit field equations corresponding to the Bianchi type V metric for f(R, T) = R + 2f(T) = R + 2λT, where λ is a constant. The expressions for the directional scale factors A, B, C in terms of the average scale factor a are also obtained. In Sect. 4, we find exact solutions of the field equations for two cases: one with a volumetric expansion law and the other with an exponential expansion law. Evolutions of some relevant cosmological parameters are investigated in Sect. 5, and the physical and geometrical properties of the models are discussed. We conclude the paper in Sect. 6.

BASIC FIELD EQUATIONS OF THE f(R, T) THEORY OF GRAVITY
The gravitational Lagrangian in f(R, T) theory of gravity, proposed by Harko et al. [21], is given by an arbitrary function f(R, T) of the Ricci scalar R and of the trace T of the stress-energy tensor. The field equations of this theory are derived by varying the action

$$S = \frac{1}{16\pi}\int f(R,T)\sqrt{-g}\,d^{4}x + \int L_{m}\sqrt{-g}\,d^{4}x \qquad (1)$$

with respect to the metric tensor $g_{\mu\nu}$, where $L_{m}$ is the matter Lagrangian density.

The stress-energy tensor of matter is defined as

$$T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\,\frac{\delta\left(\sqrt{-g}\,L_{m}\right)}{\delta g^{\mu\nu}}. \qquad (2)$$

Assuming the matter Lagrangian density to depend only on the metric tensor components $g_{\mu\nu}$, and not on their derivatives, $T_{\mu\nu}$ can be obtained as

$$T_{\mu\nu} = g_{\mu\nu}L_{m} - 2\,\frac{\partial L_{m}}{\partial g^{\mu\nu}}. \qquad (3)$$

Hence, the variation of (1) with respect to the metric tensor provides the field equations of the f(R, T) theory of gravity as

$$f_{R}(R,T)\,R_{\mu\nu} - \frac{1}{2}f(R,T)\,g_{\mu\nu} + \left(g_{\mu\nu}\Box - \nabla_{\mu}\nabla_{\nu}\right)f_{R}(R,T) = 8\pi T_{\mu\nu} - f_{T}(R,T)\,T_{\mu\nu} - f_{T}(R,T)\,\Theta_{\mu\nu}, \qquad (4)$$

where $f_{R}(R,T) = \partial f(R,T)/\partial R$, $f_{T}(R,T) = \partial f(R,T)/\partial T$, $\nabla_{\mu}$ is the covariant derivative with respect to the symmetric connection Γ associated to the metric $g_{\mu\nu}$, and

$$\Theta_{\mu\nu} \equiv g^{\alpha\beta}\,\frac{\delta T_{\alpha\beta}}{\delta g^{\mu\nu}} = -2T_{\mu\nu} + g_{\mu\nu}L_{m} - 2g^{\alpha\beta}\,\frac{\partial^{2}L_{m}}{\partial g^{\mu\nu}\,\partial g^{\alpha\beta}}. \qquad (5)$$

Since there is no unique definition of the matter Lagrangian density $L_{m}$, by assuming the stress-energy tensor of matter to be given by the stress-energy tensor of a perfect fluid of density ρ and pressure p in the form

$$T_{\mu\nu} = (\rho + p)\,u_{\mu}u_{\nu} - p\,g_{\mu\nu}, \qquad (6)$$

where the four-velocity $u^{\mu}$ satisfies the conditions $u^{\mu}\nabla_{\nu}u_{\mu} = 0$ and $u^{\mu}u_{\mu} = 1$, the matter Lagrangian density can be taken as $L_{m} = -p$.
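Before specializing, it may help to preview where this choice of L_m leads. The sketch below is our reconstruction of a standard reduction in the f(R, T) literature (following Harko et al. [21]), not text quoted from this paper: with L_m = −p, Eq. (5) gives Θ_{μν} = −2T_{μν} − p g_{μν}, and for f(R, T) = R + 2λT (so f_R = 1, f_T = 2λ) Eq. (4) reduces to

```latex
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu}
  = 8\pi T_{\mu\nu} - 2\lambda\left(T_{\mu\nu} + \Theta_{\mu\nu}\right) + \lambda T\,g_{\mu\nu}
  = 8\pi T_{\mu\nu} + 2\lambda T_{\mu\nu} + \Lambda_{\mathrm{eff}}\,g_{\mu\nu},
\qquad
\Lambda_{\mathrm{eff}} = \lambda\,(T + 2p) = \lambda\,(\rho - p),
```

where the last equality uses the perfect-fluid trace T = ρ − 3p. This Λ_eff is the sense in which the cosmological "constant" acquires a time dependence in this class of models.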
Then, from Eq. (5), we obtain

$$\Theta_{\mu\nu} = -2T_{\mu\nu} - p\,g_{\mu\nu}. \qquad (7)$$

And for the functional form

$$f(R,T) = R + 2f(T), \qquad (8)$$

where f(T) is an arbitrary function of the trace of the stress-energy tensor of matter, the gravitational field equations, from Eq. (4), are obtained as

$$R_{\mu\nu} - \frac{1}{2}R\,g_{\mu\nu} = 8\pi T_{\mu\nu} - 2f'(T)\left(T_{\mu\nu} + \Theta_{\mu\nu}\right) + f(T)\,g_{\mu\nu}, \qquad (9)$$

where the prime denotes differentiation with respect to the argument. In view of Eq. (6), Eq. (9) becomes

$$R_{\mu\nu} - \frac{1}{2}R\,g_{\mu\nu} = 8\pi T_{\mu\nu} + 2f'(T)\,T_{\mu\nu} + \left[2p\,f'(T) + f(T)\right]g_{\mu\nu}. \qquad (10)$$

METRIC AND FIELD EQUATIONS
We consider a spatially homogeneous and anisotropic Bianchi type V metric in the standard form

$$ds^{2} = dt^{2} - A^{2}\,dx^{2} - e^{2x}\left(B^{2}\,dy^{2} + C^{2}\,dz^{2}\right), \qquad (11)$$

where A, B, C are functions of the cosmic time t only. Using comoving coordinates, the field equations (10) are written for the metric (11) with a time-dependent cosmological constant Λ and the functional form f(R, T) = R + 2λT, where λ is a constant.

For the Bianchi type V metric given in Eq. (11), the various parameters of cosmological importance are: the spatial volume V = ABC; the average scale factor a = (ABC)^{1/3}; the mean Hubble parameter H = ȧ/a = (1/3)(H_x + H_y + H_z); the deceleration parameter q = −aä/ȧ²; the expansion scalar θ = 3H; the shear scalar σ² = (1/2)(H_x² + H_y² + H_z² − 3H²); and the anisotropy parameter A_m = (1/3) Σ ((H_i − H)/H)², where H_x = Ȧ/A, H_y = Ḃ/B and H_z = Ċ/C are the directional Hubble parameters.

Then, from (25)-(27), the directional scale factors A, B and C are expressed in terms of the average scale factor a. Now, to find an exact solution of the field equations, we need one extra condition, for which we consider a volumetric expansion law. We also find another exact solution by using the exponential expansion law. For the volumetric expansion law, we consider

$$V = a^{3} = c^{3}t^{3m},$$

where c and m are non-zero constants.

PHYSICAL AND GEOMETRICAL PROPERTIES OF THE MODELS
Model 1
The average Hubble parameter H, the expansion scalar θ, the deceleration parameter q, the shear scalar σ² and the anisotropy parameter A_m for the model corresponding to the volumetric expansion law are obtained as H = m/t (38), θ = 3m/t (39) and q = (1 − m)/m (40), together with the corresponding expressions for σ² and A_m. From equation (40), we see that the cosmic expansion accelerates for m > 1. Now, adding equations (14) and (15) and using the quadratic equation of state p = αρ² − ρ, we obtain the energy density, the pressure and the cosmological constant of the model. From the graphs we observe that the energy density is a decreasing function of cosmic time, the pressure is negative throughout the evolution of the universe, and the cosmological constant Λ decreases rapidly and tends to zero. Figure 4 shows that the universe is highly anisotropic at its early stage and that the anisotropy dies out in the course of evolution.

The Cosmic Jerk Parameter. The cosmic jerk parameter is defined as

$$j = \frac{\dddot{a}}{a\,H^{3}}. \qquad (46)$$

Equation (46) can be written in terms of the deceleration and the Hubble parameter as

$$j = q + 2q^{2} - \frac{\dot{q}}{H}. \qquad (47)$$

From equations (38) and (40), using (47), we get the cosmic jerk parameter for this model as

$$j(t) = \frac{(m-1)(m-2)}{m^{2}}. \qquad (48)$$

At late times, the value of the cosmic jerk parameter is 1 for the ΛCDM model. For this model, j(t) = 1 for m = 2/3. But we have the restriction m > 1. Hence, this model does not resemble the ΛCDM model.

Energy Conditions
The Weak Energy Condition (WEC), Null Energy Condition (NEC), Dominant Energy Condition (DEC) and Strong Energy Condition (SEC) are given by

WEC: ρ ≥ 0;  NEC: ρ + p ≥ 0;  DEC: ρ − p ≥ 0;  SEC: ρ + 3p ≥ 0.

For this model, from figure 1 and figure 5, we see that the WEC and DEC are satisfied. The NEC is satisfied only at late times, while the SEC is violated for this model.
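Before turning to Model 2, the kinematic quantities quoted for the two expansion laws can be cross-checked directly from the assumed scale factors. This is our own verification sketch, taking a(t) = c t^m for the power law above and a(t) = c e^{nt} for the exponential law discussed next (both forms are our reconstruction of the garbled source expressions):

```latex
% Power law a = c\,t^{m}:
H = \frac{\dot a}{a} = \frac{m}{t}, \quad \theta = 3H = \frac{3m}{t}, \quad
q = -\frac{a\ddot a}{\dot a^{2}} = \frac{1-m}{m}, \quad
j = q + 2q^{2} = \frac{(m-1)(m-2)}{m^{2}},
% so q < 0 (acceleration) for m > 1, and j = 1 only at m = 2/3.

% Exponential law a = c\,e^{nt}:
H = n, \quad \theta = 3n, \quad q = -1, \quad
j = q + 2q^{2} - \dot q/H = -1 + 2 - 0 = 1 \quad \text{for any } n.
```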
Model 2
The average Hubble parameter H, the expansion scalar θ, the deceleration parameter q, the shear scalar σ² and the anisotropy parameter A_m for the model corresponding to the exponential expansion law are obtained as H = n (49), θ = 3n (50) and q = −1 (51), together with the corresponding expressions for σ² and A_m. From the expression for the deceleration parameter q, we see that the expansion of the universe is accelerating throughout the evolution and does not depend on n. Now, adding (12) and (15) and using the quadratic equation of state p = αρ² − ρ, where α ≠ 0 is a constant, we get the energy density of the model. Using ρ and p in (12), we obtain the cosmological constant Λ. From figures 6, 7, 8 and 9, we see that the behavior of the energy density, pressure, cosmological constant and anisotropy parameter satisfies the present cosmological observations. However, in this case, the constant α should assume negative values.

The Cosmic Jerk Parameter: From equations (49) and (51), using (47), we obtain the cosmic jerk parameter for this model as

$$j(t) = 1.$$

This shows that this model resembles the ΛCDM model for any value of n.

Energy Conditions
For this model, the energy conditions are obtained in the same form as above. From figures 6 and 10, we see that for this model the WEC and DEC are satisfied, while the NEC and SEC are violated.

CONCLUSION
In this paper, we study a spatially homogeneous and anisotropic Bianchi type V universe with a time-varying cosmological constant and a quadratic equation of state in f(R, T) theory of gravity for the functional form f(R, T) = R + 2λT, where λ is a constant. We construct two cosmological models corresponding to a volumetric power-law expansion (Model 1) and an exponential expansion (Model 2). We find that:

• Both models have an initial singularity, as the metric coefficients A, B and C vanish at the initial moment.
• The physical parameters for both models diverge at the initial epoch and, for large t, these parameters tend to 0. Also, the volume of the universe is zero at t = 0 and increases with time t. Hence, both models start with the big bang singularity at t = 0 and then expand throughout the evolution.
• The energy density of Model 1 increases at the beginning, but decreases in the course of evolution and tends to 0 at late times. The energy density of Model 2 decreases throughout the evolution of the universe and tends to 0 as time goes on.
• For both models, the cosmological constant is a decreasing function of the cosmic time and tends to 0 at late times.
• Model 1 exhibits accelerated expansion for m > 1, while for Model 2 this happens for any value of n.
• Model 1 never approaches the ΛCDM model, while Model 2 resembles the ΛCDM model for any value of n.
• Model 1 satisfies present cosmological observations for positive values of α, while Model 2 satisfies the same for negative values of α.
• For both models, the energy conditions WEC and DEC are satisfied, and the NEC and SEC are violated. The violation of the SEC shows that the universe has an anti-gravitating effect, which results in the accelerating expansion of the universe.

ORCID IDs
Manash Pratim Das, https://orcid.org/0000-0002-1179-8068
Pythagorean fuzzy set and its application in career placements based on academic performance using max-min-max composition

Imprecision is an important factor in any decision-making process. Different tools and approaches have been introduced to handle the imprecise environment of group decision-making. One of the latest tools for dealing with imprecision is Pythagorean fuzzy sets. These sets generalize intuitionistic fuzzy sets with a wider scope of applications; hence the motivation for investigating their resourcefulness in tackling the career placements problem. In this paper, we explore the concept of Pythagorean fuzzy sets and deduce some theorems in connection with score and accuracy functions. Some properties of Pythagorean fuzzy sets are outlined. The idea of a relation is established in the Pythagorean fuzzy set setting, called a Pythagorean fuzzy relation, with numerical illustrations to validate the developed relation. Finally, a decision-making approach to career placements on the basis of academic performance is presented using the proposed Pythagorean fuzzy relation, called the max-min-max composition, to ascertain the suitability of careers to applicants. The approach adopted in this paper can also be applied to other multi-criteria decision-making problems or multi-attribute decision-making problems, respectively.

Introduction
Considering the imprecision in decision-making, Zadeh [1] introduced the idea of the fuzzy set, which has a membership function μ that assigns to each element of the universe of discourse a number from the unit interval [0, 1] to indicate the degree of belongingness to the set under consideration. The notion of fuzzy sets generalizes classical set theory by allowing intermediate situations between the whole and nothing. In a fuzzy set, a membership function is defined to describe the degree of membership of an element to a class. The membership value ranges from 0 to 1, where 0 shows that the element does not belong to a class, 1 means it belongs, and other values indicate the degree of membership to a class. For fuzzy sets, the membership function replaces the characteristic function of crisp sets. However, the concept of fuzzy set theory seems to be inconclusive because of the exclusion of a nonmembership function and the disregard for the possibility of a hesitation margin. Atanassov critically studied these shortcomings and proposed a concept called intuitionistic fuzzy sets (IFSs) [2][3][4][5]. The construct (that is, IFSs) incorporates both a membership function μ and a nonmembership function ν, with a hesitation margin π (that is, neither membership nor nonmembership), such that μ + ν ≤ 1 and μ + ν + π = 1. Atanassov [6] introduced intuitionistic fuzzy sets of second type (IFSST) with the property that the sum of the squares of the membership and nonmembership degrees is less than or equal to one. This concept generalizes IFSs in a way. The notion of IFSs provides a flexible framework to elaborate uncertainty and vagueness. The idea of IFSs has proved resourceful in modeling many real-life situations like medical diagnosis [7][8][9][10][11], career determination [12], selection processes [13], and multi-criteria decision-making [14][15][16], among others. There are situations where μ + ν ≥ 1, unlike the cases captured in IFSs. This limitation in IFSs naturally led to a construct called Pythagorean fuzzy sets (PFSs).
Pythagorean fuzzy set (PFS), proposed in [17][18][19], is a new tool to deal with vagueness considering the membership grade μ and nonmembership grade ν satisfying the condition μ + ν ≤ 1 or μ + ν ≥ 1; it also follows that μ² + ν² + π² = 1, where π is the Pythagorean fuzzy set index. In fact, the origin of Pythagorean fuzzy sets emanated from the IFSST earlier studied in the literature. As a generalized set, PFS has a close relationship with IFS. The construct of PFSs can be used to characterize uncertain information more sufficiently and accurately than IFS. Garg [20] presented an improved score function for the ranking order of interval-valued Pythagorean fuzzy sets (IVPFSs). Based on it, a Pythagorean fuzzy technique for order of preference by similarity to ideal solution (TOPSIS) method, taking the preferences of the experts in the form of interval-valued Pythagorean fuzzy decision matrices, was discussed. Other explorations of the theory of PFSs can be found in [21][22][23][24][25][26][27]. The Pythagorean fuzzy set has attracted the attention of many researchers, and subsequently the concept has been applied to many application areas such as decision-making, aggregation operators, and information measures. Rahman et al. [28] worked on some geometric aggregation operators on interval-valued PFSs (IVPFSs) and applied them to a group decision-making problem. Perez-Dominguez [29] presented a multiobjective optimization on the basis of ratio analysis (MOORA) under a PFS setting and applied it to an MCDM problem. Liang and Xu [30] proposed the idea of PFSs in a hesitant environment and demonstrated their MCDM ability by employing TOPSIS on an energy project selection model. Mohagheghi et al. [31] offered a novel last aggregation group decision-making process for the weight of decision-makers using PFSs. Rahman et al. [32] proposed some approaches to multi-attribute group decision-making based on an induced interval-valued Pythagorean fuzzy Einstein aggregation operator. Garg [33,34] unveiled some new logarithmic operational laws and their aggregation operators for PFSs with some applications, and discussed a decision-making problem under a Pythagorean fuzzy environment by proposing some generalized aggregation operators. Garg [35] proposed an improved score function for solving MCDM problems with partially known weight information, such that the preferences related to the criteria are taken in the form of interval-valued Pythagorean fuzzy sets. Garg [36,37] developed a new decision-making model with probabilistic information, using the concept of immediate probabilities to aggregate the information under the Pythagorean fuzzy set environment, and defined two new exponential operational laws for IVPFSs and their corresponding aggregation operators with application to MCDM. Other applications of PFSs and IVPFSs, respectively, in connection with decision-making problems, especially MCDM and MADM, have been studied in [38][39][40][41][42][43][44][45][46][47][48][49][50][51]. In this paper, we are motivated to investigate the resourcefulness of PFSs in tackling the career placements problem via the max-min-max rule because of their wider scope of applications in real-life problems embedded with imprecision. The paper is aimed at exploring the notion of PFSs and its application to career placements on the basis of academic performance using max-min-max composition.
To achieve this aim, we reiterate the concept of PFSs, outline some properties of PFSs, and deduce some theorems with respect to the score and accuracy functions studied in the literature hitherto. The idea of the Pythagorean fuzzy relation is proposed as an extension of the fuzzy relation, as well as of the intuitionistic fuzzy relation introduced in [8,52], respectively. Conclusively, a new application of Pythagorean fuzzy sets is explicated in career placements on the basis of academic performance using the proposed relation. The rest of the paper is presented thus: Sect. 2 provides some preliminaries on fuzzy sets and IFSs as foundations for the idea of PFSs, while Sect. 3 covers the notion of PFSs with some theorems. We present the Pythagorean fuzzy relation and its numerical verifications in Sect. 4. An application of the Pythagorean fuzzy relation (max-min-max composition) is supplied in a hypothetical case study in Sect. 5. Finally, Sect. 6 concludes the paper and provides directions for future studies.

Preliminaries
We recall some basic notions of fuzzy sets and IFSs.

Fuzzy sets
Definition 2.1 (See [1]) Let X be a nonempty set. A fuzzy set A in X is characterized by a membership function

μ_A : X → [0, 1].

That is, a fuzzy set A in X is an object having the form

A = {⟨x, μ_A(x)⟩ | x ∈ X},

where the function μ_A defines the degree of membership of the element x ∈ X. The closer the membership value μ_A(x) is to 1, the more x belongs to A, where the grades 1 and 0 represent full membership and full nonmembership. A fuzzy set is a collection of objects with graded membership, that is, having degrees of membership. The fuzzy set is an extension of the classical notion of a set. In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition; an element either belongs or does not belong to the set. Classical bivalent sets are, in fuzzy set theory, called crisp sets. Fuzzy sets generalize classical sets, since the indicator function of a classical set is a special case of the membership function of a fuzzy set that only takes the values 0 or 1. Fuzzy set theory permits the gradual assessment of the membership of an element in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Let us consider two examples: (i) all employees of XY Z who are over 1.8 m in height; (ii) all employees of XY Z who are tall. The first example is a classical set with a universe (all XY Z employees) and a membership rule that divides the universe into members (those over 1.8 m) and nonmembers. The second example is a fuzzy set, because some employees are definitely in the set and some are definitely not in the set, but some are borderline. This distinction between the ins, the outs, and the borderline is made more exact by the membership function μ. If we return to our second example and let A represent the fuzzy set of all tall employees and x represent a member of the universe, then the membership value μ_A(x) expresses the degree to which x is considered tall.

Intuitionistic fuzzy sets
Definition 2.2 (See [2][3][4][5]) Let a nonempty set X be fixed. An IFS A in X is an object having the form

A = {⟨x, μ_A(x), ν_A(x)⟩ | x ∈ X},

where the functions

μ_A : X → [0, 1] and ν_A : X → [0, 1]

define the degree of membership and the degree of nonmembership, respectively, of the element x ∈ X to A, which is a subset of X, and for every x ∈ X:

0 ≤ μ_A(x) + ν_A(x) ≤ 1.

For each A in X,

π_A(x) = 1 − μ_A(x) − ν_A(x)

is the intuitionistic fuzzy set index or hesitation margin of x in X. The hesitation margin π_A(x) is the degree of nondeterminacy of x ∈ X to the set A, and π_A(x) ∈ [0, 1]. The hesitation margin is the function that expresses lack of knowledge of whether x belongs to A or not. Thus,

μ_A(x) + ν_A(x) + π_A(x) = 1.
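To make the two hesitation margins concrete, here is a minimal Python sketch of our own; the pair (0.7, 0.5) anticipates the Pythagorean example introduced in the next section and is only admissible in the PFS sense:

```python
from math import sqrt

def ifs_hesitation(mu, nu):
    """Intuitionistic hesitation margin: pi = 1 - mu - nu (requires mu + nu <= 1)."""
    return 1 - mu - nu

def pfs_hesitation(mu, nu):
    """Pythagorean hesitation margin: pi = sqrt(1 - mu^2 - nu^2) (requires mu^2 + nu^2 <= 1)."""
    return sqrt(1 - mu**2 - nu**2)

print(ifs_hesitation(0.6, 0.3))  # 0.1 -- a valid IFS grade pair
print(pfs_hesitation(0.7, 0.5))  # ~0.5099 -- valid for PFS even though 0.7 + 0.5 > 1
```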
Example 2.1 Let X = {x, y, z} be a fixed universe of discourse and let A be an intuitionistic fuzzy set in X. The hesitation margin of each of the elements x, y, z to A is then computed as π_A(·) = 1 − μ_A(·) − ν_A(·).

Construct of Pythagorean fuzzy sets
Definition 3.1 (See [17][18][19]) Let X be a universal set. Then, a Pythagorean fuzzy set A, which is a set of ordered pairs over X, is defined by

A = {⟨x, μ_A(x), ν_A(x)⟩ | x ∈ X},

where the functions

μ_A : X → [0, 1] and ν_A : X → [0, 1]

define the degree of membership and the degree of nonmembership, respectively, of the element x ∈ X to A, which is a subset of X, and for every x ∈ X:

0 ≤ (μ_A(x))² + (ν_A(x))² ≤ 1.

The associated Pythagorean fuzzy set index or hesitation margin is π_A(x) = √(1 − (μ_A(x))² − (ν_A(x))²). We denote the set of all PFSs over X by PFS(X).

Example 3.1 Suppose μ_A(x) = 0.7 and ν_A(x) = 0.5. Clearly, 0.7 + 0.5 > 1, but 0.7² + 0.5² ≤ 1. Thus, π_A(x) = √(1 − 0.49 − 0.25) = 0.5099, and hence A is a PFS though not an IFS. Table 1 explains the difference between Pythagorean fuzzy sets and intuitionistic fuzzy sets.

Theorem 3.1 Let X = {x_i} be a universal set, for i = 1, ..., n, and A ∈ PFS(X). Suppose that π_A(x_i) = 0; then the following hold:
(i) μ_A(x_i) = √(1 − (ν_A(x_i))²);
(ii) ν_A(x_i) = √(1 − (μ_A(x_i))²).

Proof Suppose that x_i ∈ X and A ∈ PFS(X); we prove (i) and (ii). Assume that π_A(x_i) = 0 for x_i ∈ X.
(i) Since (μ_A(x_i))² + (ν_A(x_i))² + (π_A(x_i))² = 1 and π_A(x_i) = 0, we have (μ_A(x_i))² = 1 − (ν_A(x_i))², and hence μ_A(x_i) = √(1 − (ν_A(x_i))²).
(ii) Similar to (i).

Example 3.2 Suppose A ∈ PFS(X) with π_A(x_i) = 0 and ν_A(x_i) = 0.8. Then μ_A(x_i) = √(1 − 0.64) = 0.6.

Definition 3.2 [19] Let A ∈ PFS(X). Then, the complement of A, denoted by A^c, is defined as

A^c = {⟨x, ν_A(x), μ_A(x)⟩ | x ∈ X}.

Definition 3.3 [19] Let A, B ∈ PFS(X). Then, the union and intersection of A and B are defined by

A ∪ B = {⟨x, max(μ_A(x), μ_B(x)), min(ν_A(x), ν_B(x))⟩ | x ∈ X},
A ∩ B = {⟨x, min(μ_A(x), μ_B(x)), max(ν_A(x), ν_B(x))⟩ | x ∈ X}.

Definition 3.4 [19] Let A, B ∈ PFS(X). Then, the sum of A and B is defined as

A ⊕ B = {⟨x, √((μ_A(x))² + (μ_B(x))² − (μ_A(x))²(μ_B(x))²), ν_A(x)ν_B(x)⟩ | x ∈ X},

and the product of A and B is defined as

A ⊗ B = {⟨x, μ_A(x)μ_B(x), √((ν_A(x))² + (ν_B(x))² − (ν_A(x))²(ν_B(x))²)⟩ | x ∈ X}.

For A, B, C ∈ PFS(X), the usual commutative, associative, distributive and De Morgan-type properties of these operations hold.

Theorem 3.3 Let A ∈ PFS(X). Then, ∀x ∈ X, (μ_A(x))² + (ν_A(x))² = 1 if and only if π_A(x) = 0. Indeed, if (μ_A(x))² + (ν_A(x))² = 1, then π_A(x) = √(1 − 1) = 0. Conversely, assume that π_A(x) = 0. Then, it follows that (μ_A(x))² + (ν_A(x))² = 1.

Definition 3.8 Let A, B ∈ PFS(X). Then, A and B are comparable to each other if A ⊆ B or B ⊆ A. We state the following theorems without proof because of their straightforwardness.

Pythagorean fuzzy relation
In this section, we propose the Pythagorean fuzzy relation as an extension of both the fuzzy relation and the intuitionistic fuzzy relation [8,52]. The notions of the extension principle for fuzzy sets [54,55] and for intuitionistic fuzzy sets [56], respectively, are paramount to the Pythagorean fuzzy relation. In what follows, we define the extension principle for Pythagorean fuzzy sets.

Definition 4.1 Let f : X → Y be a mapping and A ∈ PFS(X). The image of A under f, denoted by f(A), is a Pythagorean fuzzy set of Y defined through the images of the membership and nonmembership functions of A; and, for B ∈ PFS(Y), the preimage f⁻¹(B) is a Pythagorean fuzzy set of X defined by μ_{f⁻¹(B)}(x) = μ_B(f(x)) and ν_{f⁻¹(B)}(x) = ν_B(f(x)).

Definition 4.2 Let X and Y be two nonempty sets. A Pythagorean fuzzy relation (PFR) R from X to Y is a PFS of X × Y characterized by the membership function μ_R and the nonmembership function ν_R. A PF relation (or PFR) from X to Y is denoted by R(X → Y).

Definition 4.3 Let A ∈ PFS(X) and R(X → Y) be a PFR. Then, the composition B = R ∘ A is a PFS of Y such that its membership and nonmembership functions are defined by

μ_B(y) = ⋁_x [μ_A(x) ∧ μ_R(x, y)]

and

ν_B(y) = ⋀_x [ν_A(x) ∨ ν_R(x, y)],

∀x ∈ X and y ∈ Y, where ⋁ = maximum and ⋀ = minimum.

Definition 4.4 Let Q(X → Y) and R(Y → Z) be two PFRs. Then, the max-min-max composition R ∘ Q is a PFR from X to Z such that its membership and nonmembership functions are defined by

μ_{R∘Q}(x, z) = ⋁_y [μ_Q(x, y) ∧ μ_R(y, z)]

and

ν_{R∘Q}(x, z) = ⋀_y [ν_Q(x, y) ∨ ν_R(y, z)],

∀(x, z) ∈ X × Z and ∀y ∈ Y.

Numerical examples
Before applying this relation to career placements, we discuss the procedures of the approach step-wise.
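To make Definition 4.4 and its use in the sequel concrete, here is a minimal, self-contained Python sketch of the max-min-max composition. It is our own illustration, and the relation values and names in the demo are hypothetical rather than taken from the paper's tables:

```python
from itertools import product

def max_min_max_compose(Q, R, X, Y, Z):
    """Max-min-max composition T = R o Q of Pythagorean fuzzy relations.

    Q maps pairs (x, y) and R maps pairs (y, z) to (mu, nu) tuples, so that
      mu_T(x, z) = max_y min(mu_Q(x, y), mu_R(y, z))
      nu_T(x, z) = min_y max(nu_Q(x, y), nu_R(y, z))
    """
    T = {}
    for x, z in product(X, Z):
        T[(x, z)] = (
            max(min(Q[(x, y)][0], R[(y, z)][0]) for y in Y),
            min(max(Q[(x, y)][1], R[(y, z)][1]) for y in Y),
        )
    return T

# Hypothetical demo: applicants x subjects (Q) and subjects x courses (R).
X = ["applicant1", "applicant2"]
Y = ["biology", "chemistry"]
Z = ["medicine", "pharmacy"]
Q = {("applicant1", "biology"): (0.8, 0.3), ("applicant1", "chemistry"): (0.6, 0.5),
     ("applicant2", "biology"): (0.7, 0.4), ("applicant2", "chemistry"): (0.9, 0.2)}
R = {("biology", "medicine"): (0.9, 0.2), ("biology", "pharmacy"): (0.5, 0.6),
     ("chemistry", "medicine"): (0.6, 0.5), ("chemistry", "pharmacy"): (0.8, 0.3)}

T = max_min_max_compose(Q, R, X, Y, Z)
for x in X:
    # Rank courses by higher membership, breaking ties by lower nonmembership.
    best = max(Z, key=lambda z: (T[(x, z)][0], -T[(x, z)][1]))
    print(x, "->", best, T[(x, best)])
```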
Example 4.1 We find the composition B using Definitions 4.3 and 4.4, respectively.

Example 4.2 Before calculating the max-min-max composition, we rewrite the PFSs in the required form.

Application of max-min-max composition for Pythagorean fuzzy sets to career placements
We localize the idea of PFR as follows. Let S = {s_1, ..., s_l}, C = {c_1, ..., c_m} and A = {a_1, ..., a_n} be a finite set of subjects related to the courses, a finite set of courses, and a finite set of applicants, respectively. Suppose we have two PFRs, R(A → S) and U(S → C), such that

R = {⟨(a, s), μ_R(a, s), ν_R(a, s)⟩ | (a, s) ∈ A × S}

and

U = {⟨(s, c), μ_U(s, c), ν_U(s, c)⟩ | (s, c) ∈ S × C},

where μ_R(a, s) represents the degree to which the applicant a passes the related subject requirement s, and ν_R(a, s) represents the degree to which the applicant a does not pass the related subject requirement s. Similarly, μ_U(s, c) represents the degree to which the related subject requirement s determines the course c, and ν_U(s, c) represents the degree to which the related subject requirement s does not determine the course c. The composition T of R and U is given as T = R ∘ U. This describes the state in which the applicants a_i, with respect to the related subject requirements s_j, fit the courses c_k. Thus,

μ_{R∘U}(a_i, c_k) = ⋁_{s_j} [μ_R(a_i, s_j) ∧ μ_U(s_j, c_k)]

and

ν_{R∘U}(a_i, c_k) = ⋀_{s_j} [ν_R(a_i, s_j) ∨ ν_U(s_j, c_k)],

∀a_i ∈ A and c_k ∈ C, where i, j and k take values from 1, ..., n. The values of μ_{R∘U}(a_i, c_k) and ν_{R∘U}(a_i, c_k) of the composition T = R ∘ U are as shown in Table 4. The career placement of a_i into any c_k with respect to s_j is achieved where the value of T, as computed from R and U, is the greatest.

Application example
We apply this method using a hypothetical case with quasi-real data. Let the PFR R(A → S) be as given in Table 2. These data, in PF values, were assumedly obtained after the aforementioned applicants sat a multiple-choice qualification examination on the itemized subjects within a stipulated time. The first entry is the membership value μ, representing the Pythagorean fuzzy value of the marks allocated to the questions that the applicants answered, and the second entry is the nonmembership value ν, representing the Pythagorean fuzzy value of the marks allocated to the questions failed. Again, the PFR U(S → C) is the institution's benchmark for admission into the aforesaid courses, in PF values. These data are in Table 3. We now find the indeterminate degree π for each applicant against the courses. The value of π is the marks lost due to hesitation in answering within a stipulated time; it is obtained from 1 − [μ² + ν²]. The values of π enable us to calculate T.

Decision-making on course/career placements
We present two forms of decision-making, viz: (1) horizontal decision, with respect to an applicant against the courses, and (2) vertical decision, with respect to a course against the applicants. Decisions are made based on the greatest value of the relation between applicants and courses. In accordance with the institution's benchmark for admission and the applicants' performance in the qualification examination within a stipulated time, we make the following decisions from Table 5.

Horizontal decision
This decision-making is based on the relation/suitability of the applicants to the list of courses. Ada is suitable to study any of medicine, pharmacy, anatomy, and physiology. Ene is suitable to study any of medicine, pharmacy, and anatomy. Ehi is suitable to study either medicine or pharmacy. Ebo is suitable to study only pharmacy. Ela is more suitable to study any of the courses that Ada is suitable to study.
Vertical decision
Vertical decision is centered on relation/suitability and competition. It is noticed that: medicine is suitable to be studied by Ela (0.7408) and Ene (0.6293); pharmacy is suitable to be studied by Ela (0.7408) and Ebo (0.6869); surgery is suitable to be studied by Ela (0.6869) and any of Ebo (0.5056), Ehi (0.5056), and Ene (0.5056); anatomy is suitable to be studied by Ela (0.7408) and Ene (0.6293); physiology is suitable to be studied by Ela (0.7408), Ebo (0.5629), and Ene (0.5629).

Observations
The following observations are deducible from the decisions above. (i) The vertical decision is the most reliable, because it considers suitability/relation and mental ability, and it is competitive. (ii) Ela is the most brilliant applicant, with the ability to study all the courses ahead of the other applicants, in the order: medicine (0.7408), pharmacy (0.7408), anatomy (0.7408), physiology (0.7408), and surgery (0.6869). (iii) Ene has the ability to study the following courses in the order: medicine (0.6293), anatomy (0.6293), physiology (0.5629), and surgery (0.5056). (iv) Ebo has the ability to study the following courses in the order: pharmacy (0.6869), physiology (0.5629), and surgery (0.5056). (v) Ehi has the ability to study only surgery (0.5056), with the same ability as Ene and Ebo. (vi) Ada is not suitable to study any of the courses in a competitive environment.

From the aforesaid discussion, it is fitting to assert that the max-min-max composition approach explored in this study is very suitable and decisive, especially in a critical decision-making problem like career placements. In fact, without this approach, this exercise could have been compromised, with a consequent effect on performance and efficiency.

Conclusion
The notion of Pythagorean fuzzy sets is a relatively novel mathematical framework in the fuzzy family with a higher ability to cope with the imprecision embedded in decision-making. In this paper, we have studied the concept of PFSs more expressly, with relevant illustrations where necessary. Some important remarks were drawn which differentiate PFSs from IFSs. It was observed that every IFS is a PFS, but the converse is not always true. Some theorems on PFSs were deduced and proved, especially on the ideas of score and accuracy functions. We also extended the concept of relation to PFSs, called the PF relation (or PFR), and illustrated the concept using numerical examples. Finally, an application of PFSs was explored on course placements based on academic performance using the proposed composition relation. The max-min-max composition introduced in this paper could be used as a viable tool in applying PFSs to MCDM problems, MADM problems, pattern recognition problems, etc. Notwithstanding, we suggest considering this approach from an object-oriented perspective for quick output in further research. In addition, some theoretic notions of PFSs and PF relations could still be exploited, and the concept of PFSs could be applied to solve more real-life problems embedded with imprecision.
Transcranial Electrical Stimulation for Associative Memory Enhancement: State-of-the-Art from Basic to Clinical Research Associative memory (AM) is the ability to bind new information into complex memory representations. Noninvasive brain stimulation (NIBS), especially transcranial electric stimulation (tES), has gained increased interest in research of associative memory (AM) and its impairments. To provide an overview of the current state of knowledge, we conducted a systematic review following PRISMA guidelines covering basic and clinical research. Out of 374 identified records, 41 studies were analyzed—twenty-nine in healthy young adults, six in the aging population, three comparing older and younger adults, as well as two studies on people with MCI, and one in people with Alzheimer’s dementia. Studies using transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS) as well as oscillatory (otDCS) and high-definition protocols (HD-tDCS, HD-tACS) have been included. The results showed methodological heterogeneity in terms of study design, stimulation type, and parameters, as well as outcome measures. Overall, the results show that tES is a promising method for AM enhancement, especially if the stimulation is applied over the parietal cortex and the effects are assessed in cued recall paradigms. Introduction Human memory is one of the most powerful mental processes, which is implicated in a variety of daily experiences and activities-from remembering meaningful events to enabling goal-oriented behavior. Over the last 50 years, evidence-backed cognitive theories [1][2][3][4] categorized memory according to the duration of the storage (sensory, shortand long-term memory), modality (echoic, iconic, motor, haptic), level of awareness and consciousness involved (implicit vs explicit memory), the type of knowledge (declarative vs procedural memory, i.e., knowing what and knowing how), as well as memory domains and content (semantic, episodic, autobiographical). From a functional perspective, each type of memory can be broken down into distinct yet interrelated processes, such as encoding, retention (i.e., storage), and retrieval [5,6]. The process that plays a central role in encoding and storing complex memories and experiences is referred to as binding [7]. Memory binding is the function that integrates multiple elements of complex events into unified wholes. This process is at play whenever multiple items need to be stored together either for immediate manipulation (e.g., in working memory [8]) or subsequent recollection (e.g., in source memory [9]). The umbrella term for binding-dependent memories, regardless of their duration, context, modality, or domain, is associative memory (AM). AM represents the ability to bind previously unrelated pieces of information and store it as a unified representation that is accessible when sought for retrieval [10,11]. Therefore, AM encompasses mechanisms responsible for the formation of declarative, episodic as well as autobiographical memories, and plays an important role in day-to-day functioning. Unfortunately, AM is affected by healthy aging [12] as well as different neuropathological processes [13]. Furthermore, neuropsychological studies show that AM decline is one of the reliable indicators of cognitive impairment [13,14] and one of the prominent early signs of different types of dementia [15]. 
As memory deficits still do not respond well to pharmacological treatment [16], while there is evidence for their susceptibility to plasticity-based interventions (e.g., cognitive training [17]), recent years have seen an expansion of memory-oriented transcranial brain stimulation research. Transcranial brain stimulation (TBS) or noninvasive brain stimulation (NIBS) refers to a set of techniques that use different physical forces, such as magnetic and electric fields and, more recently, ultrasound, to harness the brain's plasticity capabilities by modulating neuronal excitability and the activity of functional brain networks [18]. Here, we focus on transcranial electric stimulation (tES), a set of NIBS techniques that use weak electrical currents (usually between 1 and 2 mA) to modulate brain activity with the aim of altering behavioral responses [19].

The most used tES is bipolar transcranial direct current stimulation (tDCS), in which two electrodes of opposite polarity are placed on a person's head [20]. The set-up in which a positively charged electrode is placed over the cortical target is referred to as anodal tDCS, whereas cathodal tDCS refers to a negatively charged electrode being placed over the target brain area [20]. Anodal tDCS is presumed to induce facilitatory effects by modulating resting-state membrane potential, thus increasing cortical excitability [21,22]. Unlike tDCS with its constant current flow, transcranial alternating current stimulation (tACS) applies an oscillating current that shifts polarities between the electrodes [23,24]. These rhythmic changes in the current waveform are assumed to induce entrainment of neural oscillations to the stimulation frequency, leading to increased activation of the targeted structures [25]. For a more detailed overview of the mechanisms of different tES techniques, see [18,19,26].

Over the years, these two types of tES have been constantly modified and advanced to improve their effectiveness. To better target specific memory-relevant processes, custom waveforms of current delivery were created. For example, to simultaneously increase excitability and induce frequency-specific effects, oscillatory tDCS (otDCS) protocols have been developed [23]. Likewise, rhythmic stimulation with gamma bursts superimposed on the peaks of theta waves to modulate theta-gamma coupling has been tried [27,28]. At the same time, to increase the focality and anatomical specificity of the stimulation, so-called high-density or high-definition (HD-tDCS, HD-tACS) stimulation set-ups were developed [29,30]. Namely, instead of using just two relatively large electrodes, the number, size, and placement of electrodes are adjusted to maximize current density at the relevant cortical region.

As the field progresses and new and improved tES protocols are implemented, it remains elusive which stimulation parameters contribute to the effectiveness (or lack thereof) of tES for memory improvement. This review aims to fill this gap and provide an overview of the increasing number of tES studies intended to modulate AM. Therefore, we will systematize and critically evaluate the current state of knowledge with respect to study designs, tES technique (tDCS/tACS/otDCS/HD-tES), stimulation site, intensity, and other relevant parameters and outcomes (i.e., type of task and the outcome measures). We will focus on basic experimental research involving healthy human subjects and look into the attempts to apply tES in aging populations as well as clinical trials aimed at mitigating AM deficits.
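Before turning to the reviewed studies, the following minimal sketch (ours, not taken from any reviewed study) may help make the waveform differences among these protocols concrete. It generates idealized one-second current traces for tDCS, tACS, and otDCS; the 1 mA amplitude and 5 Hz theta frequency are illustrative assumptions, not recommended stimulation parameters.

```python
import numpy as np

# Minimal sketch (ours): idealized 1-s current traces for the three
# waveform families described above. All values are illustrative only.
fs = 1000                      # sampling rate (samples per second)
t = np.arange(fs) / fs         # one second of time points

i_tdcs = np.full_like(t, 1.0)                     # tDCS: constant 1 mA
i_tacs = 1.0 * np.sin(2 * np.pi * 5 * t)          # tACS: 5 Hz, polarity alternates
i_otdcs = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)   # otDCS: oscillates around 1 mA

# Unlike tACS, the otDCS trace never crosses zero: it keeps a constant
# (anodal) polarity while still carrying a theta-band rhythm.
assert i_tacs.min() < 0 < i_otdcs.min()
```

Plotting the three traces side by side makes it clear why otDCS is often described as combining a tDCS-like excitability shift with tACS-like rhythmic entrainment.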
Methods

The review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [31].

Search Strategy and Study Selection

We searched the PubMed, Scopus, and Web of Science databases using a combination of AM and tES keywords. The titles and abstracts were searched for AM-related terms (associative memory, source memory, relational memory, episodic memory, paired associate/s, learning associations, associative encoding, associative binding, face word, cued recall, word pairs) and tES-related terms (transcranial electric stimulation, tES, transcranial direct current, tDCS, transcranial alternating current, tACS, HD-tDCS, HD-tACS). The exact syntax terms for each database are enclosed in Appendix A. The database search was conducted on 19 January 2023. To ensure the comprehensiveness of the review, additional records (e.g., pre-prints) were sought through a manual search of Google Scholar using the same keywords (both AM- and tES-related terms) as well as the references of the articles selected from the automatic search of the databases. The search was limited to full-text original articles published in English.

Study Selection and Eligibility Criteria

The initial set consisted of 374 records: 369 identified by database search and 5 identified manually. After removing duplicated records, 157 unique records remained. The titles and abstracts of these records were screened against the eligibility criteria. When insufficient information was provided in the abstract, the methods section of the article was analyzed. Figure 1 presents the study selection PRISMA flow chart.
In line with our PICO strategy (Appendix B, Table A1), we included studies with adult human participants (age ≥ 18 years), either healthy (with or without memory complaints) or with diagnosed memory deficits (e.g., mild cognitive impairment (MCI), dementia). Studies with any type of tES (tDCS, HD-tDCS, tACS, HD-tACS, otDCS) were eligible for inclusion, with tES either as a sole intervention or in combination with other memory-oriented interventions (e.g., cognitive training). At the outcome level, we included studies that reported on the behavioral assessment of AM by either immediate or delayed cued recall, recognition, or reproduction measures. Only studies with appropriate sham-control conditions and single- or double-blinding procedures were included.

Data Extraction and Analysis

The following information was extracted from the methods section of each study: study population (healthy young adults/healthy aging group, clinical condition), sample size (total sample size and number of participants per group), participants' age (mean age and standard deviation or range), study design (between-subjects parallel-group design or within-subject crossover design), stimulation type, i.e., technique (tDCS, tACS, HD-tDCS, HD-tACS, otDCS), duration (minutes of active stimulation), dose (intensity in mA and frequency for oscillatory protocols), electrode positions (intended cortical target, e.g., left dorsolateral prefrontal cortex (dlPFC), and electrode positions per the 10-10 international EEG system, e.g., F3), timing of stimulation with respect to the AM assessment task (online protocols: during encoding and/or retrieval; offline protocols: before the AM task), number of tES sessions (single or multiple stimulation sessions), time between the sessions, and AM task and outcome measures (recognition, cued recall, reaction times, memory confidence). We also extracted the reported results on tES effects on AM-task measures.

Results

Out of the initial 374 records, after removing duplicates and excluding the studies that did not meet the inclusion criteria (e.g., were not sham-controlled, did not include associative memory measures, assessed associative memory performance only after participants slept, or were not original papers but rather meta-analyses or reviews), 41 articles were included in this systematic review: 29 on young healthy participants and 12 on aging and clinical populations (see Figure 1). The majority of the studies (38 articles) aimed to assess the effects of tES on AM as a primary outcome measure, while three used AM as a secondary outcome.

Studies on Healthy Adult Participants

Most of the AM-tES research was conducted on healthy adults. A summary of the healthy-subject studies in chronological order is presented in Table 1. Note: Sample size is presented as the total number of participants included in the study as well as the number of participants per group in a parallel-group design. The study designs are labeled parallel-group, crossover, or mixed to indicate between-subjects comparisons, within-subjects comparisons, or the presence of both repeated and non-repeated factors. For online protocols, we indicate the phase of the task in which the stimulation was delivered. For tDCS studies, a minus sign before the intensity represents cathodal stimulation; for tACS studies, intensity is presented as peak-to-peak. For tES montages, 1 × 1, 1 × 4, and 2 × 3 indicate the number of electrodes used in the setup; the return electrode(s) is always presented second.
The results always show the comparison of the active against the sham condition/group. rlPFC, rostrolateral prefrontal cortex; aDMN, anterior default mode network; pDMN, posterior default mode network; ITF, individual theta frequency.

The studies were quite diverse with respect to methods and designs. Namely, the sample sizes were between 15 and 40 in within-subject crossover designs (12 studies) and between 26 and 84 in parallel-group designs (17 studies), with 11 to 41 participants per group (for power estimates, see [60]). The effects were assessed in online protocols where tES was delivered during encoding (18 studies) or retrieval (6 studies), as well as in offline protocols where tES was applied before the AM task (4 studies) or during consolidation, i.e., between encoding and retrieval (3 studies). All but one study [36] reported immediate effects of tES on AM, with 14 studies reporting on follow-up assessment after 24 h [37,42,44,47,48,52,53,56,57], 5 days [44,52], or 7 days [51,53,57,58]. The cumulative effects of multi-day stimulation were assessed in one study [32], while three studies combined tES with cognitive training [50,51,58].

Most studies (22 articles) assessed the effects of anodal tDCS, only 4 explored cathodal tDCS effects, while frequency-modulated protocols were applied in 11 studies (6 tACS, 2 HD-tACS, and 3 otDCS). Furthermore, most experiments used a bipolar 1 × 1 montage (22 studies), while 7 studies used multielectrode set-ups to deliver HD-tDCS (3 studies) or HD-tACS (2 studies) or to optimize current flow to the targeted area. The current intensity was between 0.5 and 2 mA, delivered for 8-30 min. Oscillatory protocols were delivered at theta (6 studies) or gamma frequencies (2 studies) or a combination of the two [28]. The most common stimulation targets were the prefrontal (PFC, 64% of studies) and parietal (30%) cortices. Thus, the target electrode was commonly positioned at either F3/F4 or P3/P4 of the 10-10 international EEG system. However, the positioning of the return electrode(s) was highly inconsistent, resulting in a diverse set of montages used across the experiments. Moreover, a few studies targeted other brain areas, including the temporal cortex (e.g., [28]) and the occipital nerve [58], or used unique montages targeting various brain areas at once [54].

Some version of a standard AM task (source memory and tasks where participants had to pair words and/or pictures) was used in all papers, along with standard outcome measures of cued recall (19 studies) or recognition (14 studies). Only a handful of papers report effects on additional measures such as subjective memory confidence (2 studies) or secondary outcome measures such as reaction time (RT) in memory tasks (2 studies).

The results mostly show significant tES effects on at least one AM outcome in experiments on young healthy participants. Specifically, 18 out of 29 studies found positive tES effects on AM, 7 studies presented evidence of tES decreasing AM performance, while 4 studies reported null effects on all AM outcomes. There is no apparent relationship between the AM effects and the type of tES protocol applied, as positive effects were reported following anodal tDCS [32,33,44,47,49,52,53,57], tACS [58,59], otDCS [52,59], HD-tDCS [54], HD-tACS [48], and even cathodal tDCS [51].

Studies Conducted on Older Participants or Comparing Older vs. Younger Participants' Effects

We found 9 studies assessing tES effects on AM in the context of aging (Table 2).
Six studies were conducted on samples of older participants [61-66], aged between 53 and 90, while three studies compared the effects on AM performance between younger and older age groups [67-69]. The studies that assessed tES effects in older samples applied tDCS with a standard two-electrode montage over the frontal [62,64,65], parietal [63], or temporal cortex [61]. Only one study applied tACS as well as tDCS [65]. Single-session effects were reported in 7 papers, while cumulative effects were assessed after 3 [61] and 10 sessions [64]. Follow-up assessments were present in 5 studies [61,64,66,70,71]. Beneficial tDCS effects on cued recall were reported in one study after stimulation of the temporoparietal cortex [63] and in another that stimulated the occipital nerve [66], while the rest of the studies showed null effects [61,64,65]; one study even showed negative effects [62]. It is of note that three of the latter four studies targeted the prefrontal cortex.

When it comes to the comparison between young and older samples, Leach and colleagues found effects on both cued recall and recognition after applying tACS to the dlPFC during encoding, but these effects were found only in the younger group [69]. In contrast, Fiori and colleagues opted for applying tDCS over Wernicke's area during recognition and found positive effects only in the older group [67]. Lastly, Prehn et al. (2017) assessed the effects of combining tDCS with 20 mg citalopram during AM encoding and found the combination to be superior to tDCS or pharmacological treatment alone in both younger and older participants [68].

Studies on People with MCI and Alzheimer's Disease

Two studies assessed the effects of tES on AM in people with mild cognitive impairment (MCI) and one in people diagnosed with Alzheimer's disease (Table 2). De Sousa et al. found improved cued recall in the MCI group after applying tDCS to the temporal cortex during cognitive training [70]. However, a study that did not combine tDCS with cognitive training reported null effects in people with MCI after 5 tDCS sessions [71]. Finally, the only study that assessed Alzheimer's patients found that 1-hour gamma-tACS applied over the posterior parietal cortex (PPC) led to improved recognition in a face-word task [72].

Discussion

Tackling memory decline and deficits is one of the great challenges in cognitive neuroscience and neurorehabilitation. Over the last several years, we have witnessed increased interest in the application of different NIBS techniques to modulate memory. This systematic review provides insight into the state of the art of applying tES to modulate AM in healthy people and in clinical populations with varying levels of cognitive (memory) deficits. Most of the research involved exploring basic mechanisms in healthy adults, a few studies assessed the effects of aging, whereas clinical applications remain largely unexplored. Overall, the evidence presented here suggests that tES is a promising approach for memory enhancement, but the question of optimizing the protocols to increase effectiveness and reduce the variability of the effects remains largely unanswered. Therefore, we focus the discussion on the main challenges and highlight gaps in knowledge to be addressed in the future.

Defining the Optimal Stimulation Site/Where Do We Stimulate

The hippocampus and the surrounding medial temporal structures play a central role in AM [73,74] but, due to their anatomical position, cannot be directly modulated by tES [47].
Nonetheless, the formation and retention of memory representations are achieved through interconnectivity within a widespread hippocampo-cortical network, which also includes the frontal, temporal, and parietal cortices [75]. Hence, most of the studies delivered tES to one of these cortical regions, aiming at potentially inducing network-wide effects [76]. The frontal areas, specifically the left dlPFC, have been the most frequent tES target in AM studies. However, these experiments resulted in mixed findings and questionable specificity of the effects. It could be argued that, even when AM enhancement is achieved, it is achieved mostly via the facilitation of supporting processes such as attention, executive control, or reasoning, which are highly dlPFC-dependent [77]. On the other hand, the evidence suggests that delivering tES to the temporoparietal or posterior parietal cortex via different tES protocols can facilitate AM [34,39,44,47,49,52,59] in a persistent and function-specific manner [44,52], even in aging samples [63,68] and in persons with Alzheimer's dementia [72]. This is in line with previous neuropsychological and neuroimaging evidence on the role of the parietal cortex, specifically the PPC, in memory [78,79], and in keeping with TMS experiments showing the functional relevance of the PPC-hippocampal relay for AM [80-83]. Unfortunately, none of the studies directly compared the effects of frontal vs. parietal stimulation on AM.

Even though the PPC seems to be the most promising target for delivering tES, the optimal electrode set-up remains elusive. There is evidence showing positive effects of standard 1 × 1 montages, with the anode over the left/right PPC and the return on the contralateral cheek (e.g., [44,59,72]), as well as of similar electrode placements (e.g., CP5-Fp2) [49,63]. However, alternative electrode set-ups, such as multichannel stimulation [48] or ring electrodes [39,55], have also shown memory-modulatory effects. Recent advancements in electric field modeling allow montages to be optimized to maximize the current density in the desired brain region [84]. Such an approach has been adopted by several experiments (e.g., [39,48]); however, a modeling-informed experiment focusing on the PPC has not yet been conducted.

It is important to note that even when the same cortical area is targeted, a fixed electrode placement across all participants may result in variable outcomes. This may be due to individual differences in anatomy, including skull characteristics, brain volume, and scalp-to-cortex distances, as well as overall variability in functional and structural brain properties. Moreover, almost no study has taken into account sex differences in neuroanatomical properties, which might be an additional source of variability at the group level. These concerns are corroborated by studies that combine tES with different neuroimaging methods, including EEG [85], MRI [86], PET [87], etc. It could be possible to account for these issues by delivering individual-level, neuroimaging-guided tES with electrode placement and intensity adjusted based on current modeling for each participant. However, this approach has not yet been implemented in tES-AM studies, so its incremental value in reducing variability remains to be evaluated.

Stimulation Protocol/How Do We Stimulate

Although all reviewed tES studies on AM applied low-intensity current to modulate brain activity, the protocols differed in the intensity (dose) and waveform of the current applied.
In all studies, the current intensity was between 0.5 and 2 mA, which is within recommended safety limits [88]. However, the selected intensities were rarely justified or discussed in the papers. Only one study compared the effects of different stimulation intensities (1 mA vs. 1.5 mA) and found that only 1.5 mA had significant effects on AM [53]. Therefore, in light of the evidence showing a non-linear relationship between current intensity and physiological effects [89], it is difficult to draw conclusions about optimal dosing. Selecting the appropriate stimulation intensity is, in general, an open question in NIBS-based neuromodulation of non-motor cortical areas, where there is no direct physiological readout that could guide dosing. This is particularly pronounced in tES applications where, in contrast to transcranial magnetic stimulation (TMS), not even a threshold intensity for the motor cortex is available. Moreover, since the current density in the brain tissue depends on individual neuroanatomy, there are strong arguments for moving towards individualized dosing [90-92], that is, making sure that the current density in the targeted brain region is equal across all participants rather than applying the same intensity to everyone [91,92].

As a rule, early NIBS studies of AM modulation applied constant anodal tDCS [32-35]. This type of tES is expected to increase the excitability of the cortical tissue under the positively charged electrode and to have facilitatory effects on cognitive performance, in this case AM. Nevertheless, tDCS has low spatial focality [93], so the specificity of the effects is highly questionable, especially if the stimulation is delivered to the dlPFC and no control-function tasks are included in the study design. Despite some studies showing evidence of function-specific tDCS effects when applied over the PPC (e.g., [47,49]), the induced electric fields are widely distributed across different cortical regions. The poor spatial resolution of the classic tES techniques led to the development of techniques aiming at higher focality, such as HD-tDCS [30]. Unfortunately, the first few studies that applied HD-tDCS did not provide convincing evidence for AM neuromodulation [38,39].

The path towards increased specificity of the effects opened with the application of tACS. This type of tES generates oscillating electric fields that can modulate the brain rhythms underlying a targeted function, thus causing a change in performance [24]. The first tACS study on AM compared the effects of combined theta (5 Hz) and high-gamma (80 Hz) frequency stimulation. Namely, across three experiments, de Lara and colleagues assessed the effects of sinusoidal stimulation with gamma bursts at the peaks or troughs of the theta wave, or continuous gamma oscillations superimposed on the theta frequency [28]. They found high inter-individual variability in the effects, but gamma bursts at the troughs led to reduced cued recall. The tACS studies that followed produced mixed findings [50,58,59]. However, the only study aimed at the clinical application of tACS resulted in AM improvement in a group of people with Alzheimer's dementia [72]. In an interesting attempt to combine increased focality and specificity of stimulation, Lang and colleagues [48] delivered HD-tACS (6 Hz) over the dlPFC and showed its superior effects in comparison to HD-tDCS delivered using the same electrode montage.
However, the theoretically expected superiority of tACS over tDCS was not conceptually replicated in follow-up studies [55]. As tDCS and tACS employ different, yet not mutually exclusive, mechanisms to modulate brain activity, they can be combined to deliver so-called otDCS [23]. In this technique, the current oscillates within the same polarity, which is presumed to induce modulation of brain rhythms in a state of increased excitability. Relying on the relevance of theta-band activity for AM [94,95], the studies that applied theta-otDCS showed impaired cued recall when electrodes were positioned over the frontal cortex [43] and improvement when they were placed over the parietal cortex [52,59]. In a recent study, Živanović et al. [59] comparatively assessed the effects of anodal tDCS, theta-band tACS, and theta-band otDCS, and found all protocols to have facilitatory effects. Moreover, the study shed light on how different modes of action can affect AM at different levels of task difficulty, with oscillatory protocols being more effective when the memory load was higher [59]. Therefore, it seems that complementary modes of action, i.e., increased excitability of the relevant cortical regions coupled with network-wide oscillatory entrainment, can be beneficial in promoting mnemonic functions.

Timing of Stimulation and the Duration of the Effects/When Do We Stimulate and How Often

To modulate cognition, tES is applied in so-called online or offline protocols, that is, either during or before the task [19]. The effects registered in these two types of protocols provide evidence for different neurophysiological changes induced by tES [21]. Namely, the effects during stimulation (online) depend either on changes in membrane potential that alter neuronal excitability and modulate the response to incoming signals (tDCS), on changes in spontaneous function-related neuronal oscillatory activity (tACS), or on a combination of both (otDCS) [59]. In contrast, the effects after stimulation (offline) are thought to be mostly driven by long-term-potentiation-like (LTP-like) changes in synaptic strength within the relevant functional networks [21].

As for online protocols, in this review we found 24 studies applying tES during encoding, 7 studies during the retrieval phase, and, interestingly, none during both phases of the AM task. Applying tES during encoding carries the implicit assumption that the stimulation will facilitate the binding process and that the storage of the acquired memories will be deeper and more successful, which in turn will result in better retrieval, whereas applying it during retrieval could be expected to facilitate access to the stored information. In direct comparison, studies that applied tES during encoding have been more likely to modulate AM. This is in keeping with the idea that tES-induced neuromodulation affects binding, which is the central component of AM. Still, the question of whether stimulation over a certain brain area facilitates encoding, retrieval, or both is an interesting one. However, the success of encoding cannot be measured independently of retrieval at the behavioral level, so this question remains to be addressed by combining behavioral and neurophysiological/neuroimaging data. Moreover, while online protocols enable us to gain a better understanding of how tES affects different memory processes, they have limited potential when it comes to translating basic NIBS research into aging and clinical applications.
To reduce memory deficits, plasticity-inducing offline protocols seem like the more obvious choice. There is evidence that applying tES for 20 min in healthy people [44,49,52,59] and people with MCI [70], or even for 1 h in people with Alzheimer's disease [72], can lead to better subsequent AM performance. Still, a single tES session might not be enough to induce lasting behavioral changes (e.g., [42,51]). Repeated administration in multiple sessions over several days could be expected to lead to more consistent facilitatory effects, as was the case in one of the first AM-tES studies [32]. However, studies in older adults and people with MCI did not show AM improvement after 3, 5, or even 10 sessions [61,64,71]. Even so, due to the small number of multiple-session studies, it is difficult to pinpoint whether the null results were the consequence of stimulation site, dose, duration, or other tES parameters. It also remains unclear what the optimal number of sessions, and the time between them, would be for inducing even short- to mid-term lasting changes. Guidance can be sought in the studies that included follow-up AM assessments. Namely, experiments on healthy adults showed that the effects of a single tES session could still be observed 24 h later [37,44,52,53], while the evidence for 5- or 7-day aftereffects, although present, is much less convincing [44,52,57,66]. Thus, the practice adopted in the multi-session studies so far, separating sessions by one to a few days, seems a reasonable approach for further clinically oriented studies.

Outcome Measures

AM can be assessed using different behavioral paradigms. The vast majority of tES studies used either cued recall or associative recognition. In the cued recall paradigm, after learning new associations, participants are presented with one piece of information (the cue) and are asked to recall the item or context it was associated with; in the associative recognition paradigm, both pieces of information are presented, either correctly paired (i.e., as in the learning phase) or recombined. This review shows that tES effects are more likely to be detected when the cued recall paradigm is used to assess AM [32,33,37,44,47,52-54,57-59]. There are several reasons why this might be the case. Although both paradigms are highly dependent on the success of encoding, recognition is less demanding at the retrieval stage. That is, even when information is equally well encoded, one can fail to recall it when prompted with the cue while still being able to correctly recognize it when shown the unified representation. In keeping with this, cued recall is often more challenging for participants than recognition, which, from a psychometric perspective, makes it more sensitive for detecting small tES effects. Moreover, it could be argued that cued recall is a more focal measure of AM, since in recognition paradigms other processes, such as probability-based decision-making and response styles, strategies, and biases, also play a significant role [96].

In addition to the main AM outcomes (i.e., the percentage of correctly recalled items or recognized pairs), some studies reported tES effects on other measures, such as reaction times [39,56] or memory confidence [35,50]. Even though these results do not necessarily provide evidence of AM modulation, they offer insight into how tES affects memory functions.
For example, shorter reaction times for correctly recognized pairs [39,56] might point towards tES-facilitated quicker (i.e., easier) access to memory representations. Similarly, increased memory confidence, when coupled with better performance, might suggest more robust AM representations.

Methodological Concerns: Sample Size, Power Issues, and Blinding

To assess tES effects on AM against sham-control conditions, studies adopt either parallel-group designs (where one group of participants receives sham and one or more groups receive real tES) or crossover designs (where the same group of participants undergoes both sham and real tES in counterbalanced order). From a statistical perspective, the total number of participants enrolled in the study does not directly translate into statistical power; what matters more is the number of observations per condition. This review shows that, regardless of the study design, researchers opt for a similar number of participants (i.e., observations) per condition, usually between 15 and 25. This might be one of the hidden sources of variability in the presented findings. Namely, similar sample sizes across different study designs result in very different power, so different effect sizes are needed to show statistically significant effects. That is, with the same number of observations per condition, to reach the statistical significance threshold (p = 0.05), tES effects need to be substantially larger in the two-group design than in the crossover design (for n = 20 and a power of 0.80: d = 0.91 vs. d = 0.66). With that in mind, it is difficult to say whether the null findings presented in studies with 7, 13, or 15 participants per group [33,35,38,54,56,62,67] are simply the result of insufficient power, or whether the facilitatory effects in low-powered studies are type-1 errors. On a positive note, higher-powered studies showed modulatory effects of tES on AM in healthy [52,53,59], aging [63], and clinical samples [70,72], which allows for better estimation of the expected effect sizes and more data-driven determination of sample sizes in the future. To adequately address this issue, researchers should, whenever possible, rely on a priori power calculations to determine an adequate sample size for their study. Even when this is not carried out prior to data collection, it is useful to report the achieved power so that the reported effects, or the absence thereof, can be interpreted appropriately (see [48] for an example). The short sketch below makes the power asymmetry between the two designs concrete.
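This is a minimal sketch (ours, not drawn from any reviewed study) that reproduces the d = 0.91 vs. d = 0.66 comparison with off-the-shelf power solvers; it assumes the statsmodels Python package is available.

```python
# Minimal sketch (ours): smallest detectable effect size d at n = 20,
# alpha = .05 (two-sided), power = .80, for the two design types.
from statsmodels.stats.power import TTestIndPower, TTestPower

alpha, power, n = 0.05, 0.80, 20

# Parallel-group design: independent-samples t-test, n participants per group.
d_between = TTestIndPower().solve_power(nobs1=n, alpha=alpha, power=power)

# Crossover design: paired t-test on n within-subject difference scores.
d_within = TTestPower().solve_power(nobs=n, alpha=alpha, power=power)

print(f"parallel-group: d = {d_between:.2f}")  # ~0.91
print(f"crossover:      d = {d_within:.2f}")   # ~0.66
```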
Another important issue that might contribute to the variability of the effects is the sham protocol used, its features, and the effectiveness of participants' blinding (i.e., whether participants can distinguish between real and sham stimulation). Different types of sham protocols were used in the reviewed studies. The most frequent approach involved short ramp-up/down periods at the beginning and the very end of the stimulation [31,33,35,37,39,40,43-47,51,52,55,58,71]. Some researchers opted for a sham protocol with a ramp-up/down only at the beginning of the stimulation [28,43,49,55,65,66,68,71]. However, there were also studies applying the same current intensity as in the active protocol but for only a brief period of time, usually 30 s at the beginning of stimulation [35,39,51,53,57,58,61,64,70]. Lastly, a handful of studies [33,62,69] applied a low-intensity current (0.1 mA) throughout the stimulation period, as such a low intensity should not have physiological effects but could still induce some skin sensations. It is difficult to say which of these sham protocols is most effective, as only three studies [52,59,69] reported on the actual effectiveness of sham blinding. All of them showed that the ability to guess when the sham protocol was administered did not affect associative memory performance. Although it is highly desirable for future studies to report data on the effectiveness of the sham control, a recent study suggests that participants' beliefs and expectations about the stimulation (active vs. sham) do not moderate tDCS effects on memory [97]. Namely, Stanković and colleagues analyzed data from over 200 tDCS sessions and found no placebo-like effects stemming from participants' beliefs about the stimulation type they received; the participants' beliefs did not influence performance on associative and working memory tasks [97].

Conclusions

This systematic review of the current literature focused on the application of different tES techniques to modulate AM in healthy, aging, and clinical populations. In search of the optimal tES technique and protocol to induce meaningful changes in memory performance, we found that the studies reporting the strongest effects on AM tend to stimulate over the parietal lobe and use cued recall paradigms. So far, there is more evidence for the effectiveness of tDCS than for the other tES techniques. However, this is mainly due to its higher prevalence of use, and there is a need to further examine the effects of tACS and otDCS on AM, especially in aging individuals and people with memory deficits. Similarly, although we found that online protocols with active stimulation during encoding tend to be effective, there is sufficient evidence that offline protocols that stimulate before encoding are equally effective. Further empirical studies focusing on systematic comparisons between different stimulation protocols and their specific features are needed before healthy-participant findings can be translated into clinical applications.

Appendix A. Database Search Syntax

Syntax used for the Scopus search: TITLE-ABS ("associative memory" OR "source memory" OR "relational memory" OR "paired associates" OR "learning associations" OR "paired associate" OR "memory binding" OR "contextual binding" OR "associative encoding" OR "associative binding" OR "face word" OR "cued recall" OR "word pairs" OR "episodic memory") AND TITLE-ABS ("Transcranial direct current" OR "TDCS" OR "Transcranial electric stimulation" OR "tES" OR "Transcranial alternating current" OR "TACS" OR "Hd-tdcs")

Syntax used for the Web of Science search: AB = ("associative memory" OR "source memory" OR "relational memory" OR "paired associates" OR "learning associations" OR "paired associate" OR "memory binding" OR "contextual binding" OR "associative encoding" OR "associative binding" OR "face word" OR "cued recall" OR "word pairs" OR "episodic memory") AND AB = ("Transcranial direct current" OR "TDCS" OR "Transcranial electric stimulation" OR "tES" OR "Transcranial alternating current" OR "TACS" OR "Hd-tdcs")

Appendix B. The PICO Strategy Table

Table A1. PICO inclusion and exclusion criteria.

Participants. Included: adults (over 18 years old), both younger and older, either healthy, with subjective memory complaints, or with diagnosed memory deficits (e.g., Alzheimer's dementia, MCI).
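For readers who wish to adapt the search, here is a minimal sketch (ours, not part of the published appendix) that assembles the Scopus-style boolean query from the two keyword lists; the term lists are abbreviated for brevity.

```python
# Minimal sketch (ours): programmatically assembling the boolean search
# syntax shown in Appendix A. Term lists are abbreviated for brevity.
AM_TERMS = ["associative memory", "source memory", "relational memory",
            "paired associates", "cued recall", "word pairs", "episodic memory"]
TES_TERMS = ["Transcranial direct current", "TDCS",
             "Transcranial electric stimulation", "tES",
             "Transcranial alternating current", "TACS", "Hd-tdcs"]

def or_block(terms):
    """Quote each term and join with OR, wrapped in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

scopus_query = f"TITLE-ABS {or_block(AM_TERMS)} AND TITLE-ABS {or_block(TES_TERMS)}"
print(scopus_query)
```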
Mab_3168c, a Putative Acetyltransferase, Enhances Adherence, Intracellular Survival and Antimicrobial Resistance of Mycobacterium abscessus

Mycobacterium abscessus is a non-tuberculous mycobacterium. It can cause disease in both immunosuppressed and immunocompetent patients and is highly resistant to multiple antimicrobial agents. M. abscessus displays two different colony morphology types: smooth and rough morphotypes. Cells with a rough morphotype are more virulent. The purpose of this study was to identify genes responsible for M. abscessus morphotype switching. With transposon mutagenesis, a mutant with a Tn5 inserted into the promoter region of the mab_3168c gene was found to switch its colonies from a rough to a smooth morphotype. This mutant had higher sliding motility but a lower ability to form biofilms, aggregate in culture, and survive inside macrophages. The results of bioinformatic analyses suggest that the putative Mab_3168c protein is a member of the GCN5-related N-acetyltransferase superfamily. This prediction was supported by the demonstration that the mab_3168c gene conferred resistance to amikacin on M. abscessus and M. smegmatis cells. The multiple roles of mab_3168c suggest that it could be a potential target for the development of therapeutic regimens to treat diseases caused by M. abscessus.

Introduction

Mycobacterium abscessus is a rapidly growing mycobacterium. It has emerged as an important pathogen of soft tissue, pulmonary, and disseminated infections in both immunocompromised and immunocompetent patients [1-4]. The soft tissue infections are mainly due to penetrating trauma or surgery. A study of 86 nontuberculous mycobacterial infections of surgical wounds and the tympanic membrane in central Taiwan found that 100% of these cases were caused by M. abscessus [5-8]. M. abscessus is one of the most drug-resistant of the rapidly growing mycobacteria [2,9,10]. Like other mycobacteria [11], M. abscessus has a complex hydrophobic cell wall that constitutes an efficient permeability barrier. Based on analyses of genomic sequences, M. abscessus is predicted to produce β-lactamases, aminoglycoside phosphotransferases, and aminoglycoside acetyltransferases that may confer multiple drug resistance [12].

M. abscessus is an intracellular pathogen [13,14]. In culture, M. abscessus exhibits two different colony morphology types, referred to as rough and smooth morphotypes [13,15]. These morphotypes correlate with the virulence of M. abscessus, and cells with a rough morphotype are more virulent. The mmpL4b gene in the glycopeptidolipid biosynthesis pathway has been shown to be responsible for switching M. abscessus colonies from a smooth to a rough morphotype [16]. In this study, we identified a gene designated mab_3168c, whose function was unknown, and found that mab_3168c controlled the switching of M. abscessus colony morphology from a rough to a smooth morphotype. We also found that mab_3168c played a role in biofilm formation, intracellular survival, and resistance to antimicrobial agents.

Screening and Identification of M. abscessus Mutants with Colony Morphotype Switching

In order to identify the genes involved in M. abscessus colony morphotype switching, Tn5 transposon mutagenesis was performed. A mutant designated mab_3168c::Tn (Fig. 1A) that switched its colonies from a rough to a smooth morphotype was identified.
Characterization of the genome of this mutant revealed that the transposon was inserted 56 bp upstream of the initiation codon (GTG) of mab_3168c (GenBank accession no. NC_010397) and 76 bp downstream of the stop codon of ispG (mab_3169c) (Fig. 1B). To confirm that this morphotype switching was due to the defect in mab_3168c, the intact mab_3168c gene was cloned into the E. coli/mycobacterium shuttle vector pYUB412A to generate pYUB412A-mab_3168c, which was then introduced into the mab_3168c::Tn mutant. This complementation was found to almost completely convert the colonies of the mab_3168c::Tn mutant from a smooth back to a rough morphotype (Fig. 1A), suggesting that the mab_3168c gene conferred on M. abscessus the ability to form rough colonies. Since this complementation was not complete (Fig. 1A), RT-PCR was performed to determine the mRNA levels of mab_3168c. No mab_3168c mRNA band was detected in the samples from the mab_3168c::Tn mutant (Fig. 1C), and mab_3168c mRNA levels in the complemented mutant were approximately 60% of those of the wild type (Fig. 1D). This result suggested that the incomplete complementation of the morphotype was due to sub-optimal expression of the mab_3168c gene introduced into the mutant. As the Tn5 was inserted 76 bp downstream of the ispG gene, ispG mRNA levels were also determined and found to be the same in the wild type, mab_3168c::Tn mutant, and complemented strains (Fig. 1C). These results indicated that the Tn5 insertion inactivated mab_3168c but did not affect the expression of its neighboring gene ispG, which encodes 4-hydroxy-3-methylbut-2-en-1-yl diphosphate synthase [12,17]. For simplicity, the mab_3168c::Tn mutant and the mab_3168c-complemented mab_3168c::Tn mutant will hereafter be referred to as the mutant and the complemented mutant, respectively.

Increased Sliding Motility of the mab_3168c Mutant

Previous studies [18] have shown that smooth strains of M. smegmatis and M. avium have higher sliding ability. Therefore, the motility of the mutant cells on agar plates was examined. As shown in Fig. 2, the wild type M. abscessus cells were non-motile (0.07 ± 0.02 mm), whereas the mutant cells were highly motile (1.55 ± 0.13 mm). The complemented mutant cells regained the non-motile phenotype (0.25 ± 0.05 mm). These results indicated that mab_3168c played a major role in inhibiting the motility of M. abscessus.

Decreased Cell Surface Hydrophobicity, Biofilm Formation, and Lysozyme Resistance of the mab_3168c Mutant

Since cell surface hydrophobicity is associated with the sliding activity of mycobacteria [19], experiments were performed to investigate whether the hydrophobicity of the mutant was altered. M. abscessus cells of the wild type, mutant, and complemented mutant were grown in liquid Middlebrook 7H9 medium without Tween 80 for 3 days. The mutant exhibited a homogeneously dispersed culture, whereas the cultures of both the wild type and the complemented mutant had a clear supernatant with cells aggregated at the bottom of the culture tube (Fig. 3A). To adjust for possible variations in the growth rates of the three strains, an aggregation index, the value for aggregated cells divided by that for dispersed cells, was calculated for each culture. The mutant culture was found to have an aggregation index of less than 2.5, whereas the wild type and complemented mutant cultures had aggregation indices of 12 ± 1.2 and 7 ± 0.9, respectively (Fig. 3B). These results indicated that the mutant had a greatly reduced ability to aggregate.
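As a small illustration of the arithmetic behind this index, here is a minimal sketch (ours, not from the paper); the OD values are hypothetical, chosen only to mimic the wild-type-like and mutant-like indices reported above.

```python
# Minimal sketch (ours): the aggregation index as defined in the Methods,
# i.e., OD600 of the resuspended aggregated fraction divided by OD600 of
# the dispersed fraction. The OD values below are hypothetical.
def aggregation_index(od600_aggregated: float, od600_dispersed: float) -> float:
    """Higher values mean more of the culture settles as aggregates."""
    return od600_aggregated / od600_dispersed

print(aggregation_index(0.60, 0.05))  # wild-type-like culture: index = 12.0
print(aggregation_index(0.10, 0.05))  # mutant-like culture:    index = 2.0
```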
As cell surface hydrophobicity is a determinant of adhesion [20,21], the ability of the mutant to form biofilms was examined. Cells were grown in the wells of a 96-well polyvinylchloride plate for 6 days. As shown in Fig. 4A and 4B, substantial biofilm formation was observed in the wells of wild type M. abscessus cultures (OD595 = 0.430 ± 0.03). In contrast, biofilm formation by the mutant was diminished (OD595 = 0.113 ± 0.005), and the complemented mutant regained the biofilm-forming ability (OD595 = 0.378 ± 0.03). These results indicated that the mab_3168c gene conferred on M. abscessus cells the ability to form biofilms by increasing cell surface hydrophobicity. The reduced ability of the mutant to form biofilms was not due to a decreased growth rate, because the mutant cells grew as well as the wild type cells in Middlebrook 7H9 medium (Fig. 4C).

The susceptibility of the mutant to lysozyme was then assessed. Cells were grown in 7H9 broth containing varying amounts (0, 0.5, and 2.5 mg/ml) of lysozyme. The results showed that the mutant cells were much more susceptible to 0.5 mg/ml of lysozyme than the wild type cells (19% vs. 51% survival) (Fig. 5). Cells of the complemented mutant were found to be almost as resistant to lysozyme as those of the wild type (59% vs. 51% survival at 0.5 mg/ml lysozyme) (Fig. 5).

Decreased Intracellular Survival of the mab_3168c Mutant in Macrophages

The correlation between lysozyme susceptibility and the intracellular survival of M. abscessus cells was then evaluated. THP-1 macrophages were infected with the wild type, mutant, and complemented mutant at a multiplicity of infection (MOI) of 1 (Fig. 6A). To confirm this result, confocal microscopy was performed to enumerate intracellular mycobacteria. The result showed that the number of mab_3168c::Tn mutant organisms was significantly lower (an average of 6 vs. 20 organisms per cell) than that of the wild type and complemented strains (approximately 16 organisms per cell) at 72 hours post infection (Fig. 6B). To show that this lower intracellular organism count was not due to the death of infected THP-1 cells, the viability of uninfected and infected cells was assessed by determining LDH levels in culture supernatants. As shown in Fig. 6C, similar levels of LDH were observed in the culture supernatants of THP-1 cells infected with the wild type, mutant, and complemented mutant (approximately 10% of that of the total cell lysate). Very little LDH was detected in the culture supernatant of uninfected cells. Taken together, these results demonstrated that mab_3168c was required for the intracellular survival of M. abscessus in macrophages.

Bioinformatics Analysis of mab_3168c

Bioinformatic analyses of the Mab_3168c protein were performed to investigate its possible functions (data not shown). A large number of proteins with significant homology to the 270-amino-acid Mab_3168c protein were found by BLASTP. Most of these proteins were members of the GCN5-related N-acetyltransferase (GNAT) superfamily. Reverse Position-Specific BLAST (RPS-BLAST) analyses revealed the presence of an acetyltransferase domain of the pfam00583 family. This domain extends from residues 205 to 259 of the putative Mab_3168c protein and contains the consensus sequence V/I-x-x-x-x-Q/R-x-x-G-x-G/A of acetyltransferases [22].
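To make the consensus notation concrete, the following minimal sketch (ours, not from the paper) scans a protein sequence for this pattern with a regular expression; the example sequence is invented solely so that it contains one match.

```python
import re

# Minimal sketch (ours): scanning for the acetyltransferase consensus
# V/I-x-x-x-x-Q/R-x-x-G-x-G/A quoted above ("x" = any residue).
MOTIF = re.compile(r"[VI].{4}[QR].{2}G.[GA]")

seq = "MKTLVAAAAQAAGAGDE"  # hypothetical fragment containing one match
m = MOTIF.search(seq)
if m:
    # Report 1-based residue positions, as in the text above.
    print(f"motif at residues {m.start() + 1}-{m.end()}: {m.group()}")
```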
Results of 3D-PSSM prediction also showed that Mab_3168c bears a strong structural similarity to several acetyltransferases of the GNAT family, especially to the aminoglycoside 6′-N-acetyltransferase of Enterococcus faecium.

Increased Susceptibility of the Mutant to Amikacin

As the aminoglycoside 6′-N-acetyltransferase of Enterococcus faecium contributes to its aminoglycoside resistance [23], the possibility that mab_3168c conferred antibiotic resistance on M. abscessus was examined. Cells of the wild type, mutant, and complemented mutant were grown on 7H11 agar plates with or without rifampin, ciprofloxacin, or amikacin. Other aminoglycosides, such as kanamycin, neomycin, paromomycin, ribostamycin, and gentamicin B, were not tested because the transposon used for mutagenesis contains the kanamycin-resistance gene, which also confers resistance to these aminoglycosides. No difference in susceptibility to rifampin and ciprofloxacin was observed among the three strains (data not shown). However, cells of the mutant were more sensitive to amikacin than those of the wild type and the complemented mutant (Fig. 7A). In overnight cultures inoculated with 10⁷ or 10⁶ CFU/ml, the growth of mutant cells was completely inhibited by 20 µg/ml of amikacin, and the growth of the culture inoculated with 10⁸ CFU/ml was inhibited by 6.1 ± 4.4%, with a survival rate of (4.1 ± 3.0) × 10⁻⁵, calculated as the CFUs on the amikacin plate divided by those on the plate without amikacin. Cells of the complemented mutant were almost as resistant as those of the wild type to 20 µg/ml of amikacin, with survival rates of (6.7 ± 0.5) × 10⁻⁴ and (7.8 ± 2.0) × 10⁻⁴, respectively (Fig. 7A). To confirm amikacin susceptibility, the E test was carried out. As shown in Table 1, the amikacin MICs of both the wild type and complemented strains were 4 µg/ml, but the MIC of the mab_3168c::Tn mutant was only 2 µg/ml. To further confirm that the mab_3168c gene conferred resistance to amikacin, it was introduced into a different mycobacterium, M. smegmatis, and the transformants were then assayed for their susceptibility to amikacin. The MIC of M. smegmatis cells containing pYUB412A-mab_3168c was two-fold higher than that of cells containing the vector pYUB412A (0.25 vs. 0.12 µg/ml) (Fig. 7B and Table 1).

Discussion

In this study, we showed that loss of mab_3168c expression resulted in alterations in the colony morphology, cell surface hydrophobicity, sliding motility, biofilm-forming ability, amikacin and lysozyme resistance, and intracellular survivability of M. abscessus. Although bioinformatic analyses revealed the presence of a sequence motif characteristic of the GCN5-related N-acetyltransferases (GNAT), it remains to be determined whether the putative Mab_3168c protein is an acetyltransferase, as our repeated attempts to express the mab_3168c gene have not been successful. The relationship between an acetyltransferase and colony morphology was first established in M. smegmatis [24]. In this organism, disruption of the atf1 gene was found to cause its colonies to switch from a smooth to a rough morphotype [24]. The atf1 gene encodes an O-acetyltransferase, which is believed to acetylate glycopeptidolipids (GPLs) [24]. However, we did not observe any differences in the lipid profiles of the wild type and mab_3168c mutant cells by thin-layer chromatography (TLC) and MALDI-TOF analysis (data not shown).
It was unexpected that the cell wall lipid profiles of the wild type, with a rough morphotype, and the mutant, with a smooth morphotype, were the same, as GPL is believed to make mycobacterial colonies smooth [16,19,24,25]. One possibility is that other cell wall components also affect colony morphology. In mycobacteria, UDP-N-acetylglucosamine (UDP-GlcNAc), a major component of peptidoglycan, is synthesized by the GlmU protein, a glucosamine-1-phosphate acetyltransferase. Mutation of glmU has been shown to impair the synthesis of peptidoglycan and reduce the growth of M. smegmatis [26]. It is possible that, instead of affecting GPL production, Mab_3168c affects the synthesis of other cell wall components, similar to the GlmU protein.

Another gene that has been shown to be associated with the colony morphotype of M. abscessus is mmpL4b, which encodes a membrane protein [16]. Deletion of this gene renders M. abscessus unable to produce GPL and to form smooth colonies. The ΔmmpL4b mutant also loses sliding motility and the ability to form biofilms. However, it survives better in macrophages. In this study, we found that the mab_3168c mutant formed smooth colonies, gained sliding motility, but lost the ability to survive inside macrophages. These three properties are the opposite of those of the ΔmmpL4b mutant. However, similar to the ΔmmpL4b mutant, the mab_3168c mutant also lost the ability to form biofilms. These results suggest that biofilm formation is controlled by multiple mechanisms and that both the mmpL4b and mab_3168c genes regulate biofilm formation. Further support for this hypothesis is the finding that inactivation of the lsr2 gene renders M. smegmatis hypermotile and unable to form biofilms [27], very similar to the mab_3168c mutant. The lsr2 mutant also shows no changes in cell wall lipid profiles [27,28]. In M. tuberculosis, Lsr2 is a nucleoid-associated protein, similar to the histone-like nucleoid-structuring protein H-NS [29,30], and is involved in the regulation of cell wall synthesis [31] as well as in the transcriptional suppression of many genes [30]. Although the lsr2 and mab_3168c mutants are phenotypically similar, the lsr2 and mab_3168c genes are distinct, with no significant sequence homology.

The mab_3168c gene was shown to confer on M. abscessus resistance to amikacin, a semisynthetic aminoglycoside derived from kanamycin. Many bacteria that are resistant to gentamicin and tobramycin are sensitive to amikacin. Therefore, amikacin is often used to treat M. abscessus infections [32]. Aminoglycoside resistance may be due to decreased cell permeability, alterations in ribosome binding, or inactivation of the drug by aminoglycoside-modifying enzymes [33]. It is likely that mab_3168c is involved in cell wall synthesis, making M. abscessus cells less permeable to amikacin. It is also possible that Mab_3168c acetylates amikacin, rendering it inactive. In addition to Mab_3168c, M. abscessus may also produce other enzymes that can inactivate antibiotics, as analyses of genomic sequences have revealed its potential to produce β-lactamases and aminoglycoside-converting enzymes, including aminoglycoside O-phosphotransferases and aminoglycoside N-acetyltransferases [12,34,35]. This property could explain the multiple drug resistance of M. abscessus. We also found that inactivation of mab_3168c decreased the ability of M. abscessus to survive inside macrophages. This defect is likely due to increased susceptibility to lysozyme.
This possibility is supported by the finding that disruption of the aminoglycoside 2′-N-acetyltransferase gene, aac(2′)-Id, renders M. smegmatis susceptible to lysozyme [36]. In conclusion, based on comparison with the known characteristics of different members of the GNAT superfamily, we predict that Mab_3168c is an N-acetyltransferase. Inactivation of mab_3168c may cause changes in the structure of the cell wall, resulting in a pleiotropic phenotype of M. abscessus with an altered colony morphotype, increased sliding motility, reduced cellular aggregation and ability to survive inside macrophages, and increased susceptibility to amikacin. Since mab_3168c plays a role in many different cellular functions, it could be a good target for the development of drugs against M. abscessus.

Materials and Methods

Transposon Mutagenesis and Genetic Analysis of Mutants

The M. abscessus transposon mutant library was generated using the EZ-Tn5™ <KAN-2> Tnp Transposome™ Kit (Epicentre, USA). Approximately 2000 mutants were screened to detect those with altered colony morphology. The Tn5 insertion site in the chromosome was identified by inverse PCR using the KAN-2 FP-1 forward and KAN-2 RP-1 reverse primers (Fig. 1B) and by DNA sequencing.

Complementation of Mutants with mab_3168c

The mycobacterial shuttle vector pYUB412 [37] was used as a backbone in which the hygromycin-resistance gene was replaced by an apramycin-resistance gene to construct pYUB412A. Genomic DNA of M. abscessus was used as the template for PCR to amplify the region encompassing the full-length mab_3168c gene and 1 kb upstream of the gene, using the primer pair mab_3168c-F (5′-AGGTATACCATCTTCGCGGCGAT-3′) and mab_3168c-R (5′-AGCTCGAGTTAGCTGACGGGGA-3′), containing BstZ17I and XhoI sites (underlined), respectively. The resulting 1.8-kb DNA fragment was cloned into pYUB412A between the ZraI and SalI sites, generating plasmid pYUB412A-mab_3168c. For complementation, pYUB412A-mab_3168c was introduced into the mab_3168c::Tn mutant by electroporation. Electrocompetent M. abscessus cells were prepared by growing them to mid-log phase. The cells were harvested, washed in 10% glycerol, and then resuspended in cold 10% glycerol at a concentration of 10⁷ cells/ml.

Sliding Motility Test

One colony of each mycobacterium was inoculated in the center of a motility plate consisting of Middlebrook 7H9 with 0.3% agar. The inoculated plates were incubated at 37 °C for 5 days [18]. The sliding distance was measured in millimeters.

Aggregation Capability Assay

Mycobacterial cells were inoculated into a tube containing 5 ml of Middlebrook 7H9 broth at a concentration of OD600 = 0.1 and incubated on a shaker at 37 °C for 3 days. After allowing the culture tube to stand still for 3 hours, the upper portion of the culture containing dispersed cells was removed, and its OD600 value was determined. The OD600 of the bottom portion of the culture was measured after the aggregated cells had been completely suspended by vortexing with glass beads 4.5 mm in diameter (Biospec, USA), as described previously [13,38,39]. The aggregation index was calculated as the ratio of the optical density of aggregated cells to that of dispersed cells.

Biofilm Formation

Mycobacterial cells at a concentration of OD600 = 0.1 were inoculated into 100 µl of Middlebrook 7H9 broth in each well of a sterile 96-well, flat-bottom polyvinylchloride plate (BD, USA). After 6 days of incubation, the medium in each well was removed, and the wells were washed with sterile PBS to remove non-adherent cells.
The wells were then stained with 0.5% (wt/vol) crystal violet for 1 hour. After washing with PBS, the stained biofilms were photographed. To quantitate cells, the cells in the biofilm were suspended in 100% ethanol, and the OD₅₉₅ value of the cell suspension was determined [40].

Lipid Extraction and Analysis
Total lipids from mycobacterial cells of plate-grown cultures were extracted with chloroform/methanol (2:1, v/v) at 56 °C for 60 min with sonication. The extracted lipids were spotted on an aluminum-backed silica gel 60 TLC plate (Merck, Germany) and resolved with a solvent containing chloroform and methanol at a ratio of 90:10 (v/v), or chloroform, methanol, and water at a ratio of 100:16:2 (v/v) or 60:16:2 (v/v), as previously described [41,42]. To visualize lipids, the plate was sprayed with 1% 1-naphthol and 5% H₂SO₄ in ethanol and then charred with a heat gun until spots with hues characteristic of different lipid classes appeared. The mass of each lipid species was determined by matrix-assisted laser desorption ionization-time-of-flight (MALDI-TOF) mass spectrometry with a pulsed laser emitting at 337 nm. Samples were mixed with 2,5-dihydroxybenzoic acid as the matrix and analyzed in reflectron mode with an accelerating voltage of 25 kV.

Lysozyme Susceptibility Assay
M. abscessus cells (10⁷/ml) were inoculated into 7H9 broth containing various concentrations of lysozyme (0.5 mg/ml and 2.5 mg/ml) and incubated at 37 °C for 24 hours, followed by enumeration of CFU on 7H11 agar plates.

Infection of Human Macrophages
THP-1 cells, a human acute monocytic leukemia cell line, were obtained from the American Type Culture Collection (ATCC) and cultured in RPMI 1640 medium supplemented with 10% fetal bovine serum (FBS, GIBCO) at 37 °C in a humidified CO₂ incubator. THP-1 cells were differentiated into adherent macrophages by adding 500 ng/ml of phorbol-12-myristate-13-acetate (PMA) to the culture. Two days after the addition of PMA, the cells were infected with mycobacteria at a multiplicity of infection (MOI) of 1 for 2 hours at 37 °C. The infected macrophages were washed with sterile PBS to remove extracellular mycobacteria, lysed with 1% Triton X-100, and then plated on 7H11 agar plates to determine the colony-forming units of intracellular mycobacteria, as described previously [43,44,45]. Viability of M. abscessus-infected THP-1 macrophages was evaluated by measuring the levels of lactate dehydrogenase (LDH) in culture supernatants. THP-1 macrophages were infected with the wild type, the mab_3168c::Tn mutant, and the complemented mutant. The levels of LDH activity in the culture supernatants were determined using a CytoTox 96 assay kit (Promega, USA) according to the manufacturer's protocol.

Confocal Microscopy
Infected macrophages were fixed with 4% paraformaldehyde for 15 min and permeabilized with 0.1% Triton X-100 for 20 min. The intracellular mycobacteria were stained with auramine (Sigma, USA) for 20 min at 25 °C, treated with 0.5% acid alcohol for 3 min, and then examined under a confocal laser scanning microscope (Leica SP5 equipped with a 100× NA 1.4 objective lens).

Antimicrobial Susceptibility Test
Susceptibility to amikacin was determined by the E test. M. abscessus and M. smegmatis cells were incubated in 7H9 medium until the culture turbidity reached a McFarland standard of 1.0 (≈3×10⁸ CFU/ml). This cell suspension was then spread on a 7H11 agar plate (10 cm) supplemented with 10% OADC using a cotton swab.
An amikacin E test strip (Oxoid) was then placed on the plate, and the plate was incubated for 3-5 days until the MIC could be read. The value shown on the strip where it intersected the growth inhibition zone was taken as the amikacin MIC of the organism tested. The MIC data presented are averages of duplicate determinations.
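The aggregation index described in the assay above reduces to a ratio of two OD₆₀₀ readings. A minimal sketch of the calculation follows; the function name and example values are illustrative, not from the study.

```python
# Minimal sketch of the aggregation-index calculation described above.
# Assumes the two OD600 readings have already been taken; names and the
# example numbers are illustrative, not the study's data.

def aggregation_index(od_aggregated: float, od_dispersed: float) -> float:
    """Ratio of the OD600 of resuspended (aggregated) cells from the bottom
    of the culture to the OD600 of dispersed cells from the upper portion."""
    if od_dispersed <= 0:
        raise ValueError("dispersed-cell OD600 must be positive")
    return od_aggregated / od_dispersed

# Hypothetical readings: a higher index means stronger aggregation.
print(aggregation_index(od_aggregated=0.85, od_dispersed=0.25))  # 3.4
```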
High On-Aspirin Platelet Reactivity and Clinical Outcome in Patients With Stable Coronary Artery Disease: Results From ASCET (Aspirin Nonresponsiveness and Clopidogrel Endpoint Trial)

Background
Patients with stable coronary artery disease on single-antiplatelet therapy with aspirin are still at risk for atherothrombotic events, and high on-aspirin residual platelet reactivity (RPR) has been suggested as a risk factor.

Methods and Results
In this randomized trial, the association between platelet function determined by the PFA100 platelet function analyzer system (Siemens Healthcare Diagnostics, Germany) and clinical outcome was studied in 1001 patients, all on single-antiplatelet therapy with aspirin (160 mg/d). Patients were randomized to continue with aspirin 160 mg/d or change to clopidogrel 75 mg/d. A composite end point of death, myocardial infarction, ischemic stroke, and unstable angina was used. At 2-year follow-up, 106 primary end points were registered. The prevalence of high RPR was 25.9%. High on-aspirin RPR did not significantly influence the primary end point in the aspirin group (13.3% versus 9.9%, P=0.31). However, in post hoc analysis, patients with von Willebrand factor levels or platelet count below median values and high on-aspirin RPR had a statistically significantly higher end point rate than patients with low RPR (20% versus 7.5%, P=0.014, and 18.2% versus 10.8%, P=0.039, respectively). The composite end point rate in patients with high on-aspirin RPR treated with clopidogrel was not different from that of patients treated with aspirin (7.6% versus 13.3%, P=0.16).

Conclusions
In stable, aspirin-treated patients with coronary artery disease, high on-aspirin RPR did not relate to clinical outcome and did not identify a group responsive to clopidogrel. Post hoc subgroup analysis raised the possibility that high on-aspirin RPR might be predictive in patients with low von Willebrand factor or platelet count, but these findings will require confirmation in future studies.

Clinical Trial Registration: http://www.clinicaltrials.gov. Unique identifier: NCT00222261. (J Am Heart Assoc. 2012;1:e000703 doi: 10.1161/JAHA.112.000703.)

Despite the well-documented efficacy of aspirin in reducing myocardial infarction, stroke, and death, some patients on aspirin experience new cardiovascular events. This has led to the introduction of the concepts of aspirin nonresponsiveness, aspirin resistance, and high on-treatment residual platelet reactivity (RPR). 1,2 Several reports have shown, by laboratory testing, insufficient platelet inhibition in 5% to 40% of aspirin-treated patients. Different laboratory methods have been used in the evaluation of the response to aspirin, such as the platelet reactivity index, the platelet aggregate ratio, platelet aggregation (optical or by impedance), the PFA100 platelet function analyzer method (Siemens Healthcare Diagnostics, Germany), and lately the VerifyNow Aspirin method (Accumetrics, San Diego, CA). 3 Different mechanisms have been discussed as potential explanations of the laboratory phenomenon of aspirin nonresponsiveness. 1,4-6 Several studies have documented substantial inhibition of the production of thromboxane A₂ in aspirin-treated patients with high RPR, and thus the term aspirin resistance seems inappropriate. 7 The clinical relevance of high on-aspirin RPR has been addressed in many trials. 8-14
Nevertheless, current guidelines do not recommend routine use of platelet function tests in aspirin-treated patients. 1,15,16 New antiplatelet drugs for long-term treatment of patients with coronary artery disease (CAD) have become available for clinical use. 2 Clopidogrel has been widely used for the past decade in combination with aspirin in high-risk patients, often for a time-limited period. Clopidogrel has also been used as antiplatelet monotherapy in patients with contraindications to aspirin. 17 Both drugs have shown large variations in the frequency of on-treatment RPR. 18-21 It is not known whether patients with high on-aspirin RPR are better protected with clopidogrel. The aim of the present study was to investigate the influence of high on-aspirin RPR on clinical outcome after 2 years in patients with documented CAD. The hypothesis was that high on-aspirin RPR as measured with the PFA100 method would translate into an increased rate of clinical end points after 2 years. In addition, we hypothesized that patients with high on-aspirin RPR would benefit from clopidogrel treatment.

Study Design
The ASpirin nonresponsiveness and Clopidogrel Endpoint Trial (ASCET) is a single-center, randomized open trial (double blinded for the results of platelet function tests) investigating the clinical outcome over a minimum period of 2 years in aspirin-treated, stable CAD patients as related to their RPR. Patients (n=1001) were randomized to either continue aspirin 160 mg/d or change to clopidogrel 75 mg/d after having given written informed consent in accordance with the recommendations of the revised Declaration of Helsinki. Randomization was undertaken by using consecutively numbered nontranslucent envelopes with computerized random allocation to the treatment groups. The clinical outcome was related to the patient's response to aspirin at baseline, assessed by the PFA100 method. Compliance with aspirin therapy was assessed by determination of serum thromboxane B₂ (TxB₂) levels and by written questionnaires. Follow-up visits were scheduled after 1, 12, and 24 months. The study was approved by the local ethics committee. The details of the design have been published previously. 22 The ASCET study is registered at www.clinicaltrials.gov (identification No. NCT00222261).

Study Patients
The study included clinically stable patients of both sexes who were 18 to 80 years of age, had angiographically documented CAD, and were on long-term single-antiplatelet therapy with aspirin (160 mg/d) at randomization. Patients were not included if there was still an indication for dual-antiplatelet or warfarin treatment or if there were contraindications to any of the study drugs. Pregnant or breastfeeding women and patients with psychiatric disease or alcohol or drug abuse that could reduce patient compliance were also not included. Baseline characteristics of the study population are presented in Table 1. Values are given as n (%), mean±SD, mean (range), median (25th, 75th percentiles), or percentage, as indicated. RPR is defined as residual platelet reactivity assessed by PFA100; hypertension, as previously diagnosed and/or currently treated hypertension; current smoking, as regular tobacco smoking or <3 months since smoking cessation; and diabetes mellitus, as previously diagnosed diabetes or fasting glucose ≥7 mmol/L.
Because of a resolution of Rikstrygdeverket, Norway, the Plavix tablets were covered by the Act of the National Insurance Administration. The hospital pharmacy stored and delivered the study drugs. Patients randomized to clopidogrel 75 mg/d were not given loading doses. The treatment was initiated on the day of randomization. All other medications were given according to current guidelines.

Laboratory Methods
All blood samples were drawn between 8:00 and 10:30 AM in the fasting condition, ≈24 hours after the most recent intake of medication. Routine analyses were performed with conventional laboratory methods. Citrated whole blood (sodium citrate, 0.129 mol/L, in dilution 1:10) was used for platelet function testing, which was carried out within 2 hours after blood sampling. The PFA100 system (Siemens Healthcare Diagnostics, Germany) simulates platelet-based primary hemostasis in vitro. A syringe aspirates citrated whole blood under steady-flow conditions through a small aperture cut into a membrane in the test cartridge. The membrane in the cartridge used is coated with type I collagen and epinephrine. The time necessary for occlusion of the aperture is defined as the closure time, which is indicative of platelet function in whole blood. To define the cutoff value for high RPR, we tested 200 CAD patients not on antiplatelet therapy (from the warfarin group of the Warfarin, Aspirin, Reinfarction Study [WARIS II]), and the 95th percentile in this group was used, giving a cutoff value of 196 seconds. 8 The term high RPR as defined by the PFA100 method has been used throughout, in accordance with the study protocol. 22 Whole blood without anticoagulants was collected and kept at 37 °C for 1 hour before centrifugation at 2500×g for 10 minutes for serum TxB₂ determination (Amersham Thromboxane B₂ Biotrak Assay, GE Healthcare, Buckinghamshire, UK). Von Willebrand factor (vWF) was measured in citrated plasma with a commercial ELISA method (Asserachrom vWF Ag, Stago Diagnostica, Asnieres, France).

Clinical Endpoints
The primary end point was the first event of the composite of unstable angina (with ECG changes or levels of cardiac markers not classifiable as myocardial infarction), nonhemorrhagic stroke, myocardial infarction, or death. In all patients who were unable to attend the final visit, the clinical end points were recorded on request. Evaluation of end points was performed by an end point committee without access to the treatment code. Internationally accepted diagnostic criteria were used. A random selection of 50 (5%) of the completed Case Record Forms was monitored and approved by an independent consultant.

Bleeding Classification
Major bleeding was defined as bleeding requiring transfusion of blood or surgical intervention. Intracranial bleeding was always classified as major bleeding. Minor bleeding was defined as bleeding not requiring transfusion or surgical intervention, including subcutaneous bruising, minor hematomas, and oozing from puncture sites or gums.

Adverse Events
All adverse events for either of the study drugs were recorded throughout the study period. If considered necessary, the study drug was terminated. In the case of any serious event, the National Drug Authority was notified.

Statistical Analysis
The observation time was a minimum of 2 years per patient.
The number of patients needed to detect a 40% reduction in the composite end point in patients with low on-aspirin RPR as compared with patients with high on-aspirin RPR (from 32% to 18%), provided that 30% had high RPR, was calculated to be 500, with a type 1 error of 5% and 80% power. An additional 500 patients were included for the clopidogrel treatment. Continuous variables are presented as mean±SD or, for skewed variables, median (25th, 75th percentiles). Categorical variables are presented as numbers or percentages, as appropriate. Group comparisons were performed by Student unpaired t tests or Mann-Whitney U tests, when appropriate, for continuous variables and by the χ² test or Fisher exact test for categorical variables. The analysis of end points was performed according to the intention-to-treat principle, and rate ratios were calculated by using follow-up time as the denominator in a 2×2 table (person-years model). 23 For the group continuing on aspirin, patients with high RPR, as compared with patients with low RPR, were analyzed with regard to composite end points. This association was quantified by the crude odds ratio (OR) with 95% confidence interval (CI). Stratification analyses based on differences between the groups were performed. The Breslow-Day test of heterogeneity was used to pinpoint effect modification before quantifying potential confounders by the Mantel-Haenszel method. Because none of the tested variables appeared to be confounders, no adjustments were performed. The log-likelihood ratio test was performed to compare the model with and without interactions. Estimation of the OR for the different levels of interaction variables was done by using the variance-covariance matrix to estimate the correct variance of the OR. 24 In the post hoc analyses, adjustments for relevant covariates (differences between groups at the level of P<0.20) were performed by logistic regression analyses. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines were followed. 25

Administrative Matters
The ASCET study had a Steering Committee for monitoring the study's progress and for quality evaluation. An independent Data Safety and Monitoring Board was allowed access to the database during the study to assess data quality and evaluate the number of adverse events. All end points and serious adverse events were ultimately evaluated by the Data Safety and Monitoring Board before study closure.

Results
During the enrollment period (March 2003 to July 2008), 2358 patients undergoing coronary angiography because of clinical symptoms of CAD were screened for enrollment (Figure 1). A total of 1001 patients were enrolled in the study after consideration of inclusion and exclusion criteria, logistics, and patient consent. All patients were followed up for 2 years, and the study was completed in July 2010. Baseline characteristics of the total population are given in Table 1. The number of patients randomized to continue on aspirin was 502, and 499 were randomized to clopidogrel. The two randomized groups were well balanced with regard to all baseline characteristics, and there were no differences in the frequency of high on-aspirin RPR between the groups. The number of patients discontinuing study medication without having reached an end point was 95 (9.5%), mainly because of a new indication for dual-antiplatelet treatment (Table 2).
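The crude ORs and 95% CIs reported in the following paragraphs come from simple 2×2 tables, as described in the statistical analysis above. A minimal sketch, assuming the standard Woolf (log-OR) interval (the exact variance estimation used by the authors, per reference 24, may differ); the example counts are the high- versus low-RPR event counts reported below for the aspirin arm:

```python
# Minimal sketch: crude odds ratio with a Woolf (log-OR) 95% CI from a
# 2x2 table. Method details are an assumption; the example counts are
# the aspirin-arm figures reported in the Results (17/128 vs 37/374).
import math

def odds_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    a, b = events_a, total_a - events_a           # exposed: events / non-events
    c, d = events_b, total_b - events_b           # reference: events / non-events
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf standard error
    lo, hi = (math.exp(math.log(or_) + s * z * se_log_or) for s in (-1, 1))
    return or_, lo, hi

print(odds_ratio_ci(17, 128, 37, 374))  # ~ (1.40, 0.76, 2.58): CI crosses 1
```

Consistent with the reported P=0.31, the interval for the high- versus low-RPR comparison includes 1.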
The total number of primary end points recorded was 106 (10.6%); the first event was unstable angina in 33 patients.

RPR in the Total Study Population
In the total population, the frequency of high RPR on aspirin treatment evaluated by the PFA100 method at randomization was 25.9%. The mean level of serum TxB₂ was 2.7 ng/mL (range, 0 to 21.0 ng/mL) at randomization, independent of on-aspirin RPR.

High On-Aspirin RPR in the Aspirin Group
The number of patients with high RPR according to PFA100 was 128 (25.5%). Patients with high RPR had significantly higher vWF levels (P<0.001) and platelet counts (P=0.05) (Table 3). The associations between the presence of high RPR and clinical end points are shown in Table 4. The number of composite end points in the aspirin group was 54 (10.8%). High on-treatment RPR did not significantly influence the end point rate (17 of 128 [13.3%]) versus that obtained in patients with low RPR (37 of 374 [9.9%]) (P=0.31). There was also no statistically significant influence of high RPR on the different components of the end point, although the proportion of patients who experienced a myocardial infarction was higher in patients with high RPR (6.3% versus 2.7%, P=0.07). vWF and platelet count were found to be effect modifiers. Therefore, separate analyses were undertaken with stratification on their respective median values. Patients with low vWF (≤106%) or low platelet count (≤227×10⁹/L) and high on-aspirin RPR had a significantly higher end point rate than patients with low on-aspirin RPR (20% versus 7.5%, OR 3.06, P=0.014, and 18.2% versus 10.8%, OR 2.25, P=0.039, respectively) (Table 5).

High On-Aspirin RPR in the Clopidogrel Group
In the group randomized to clopidogrel (n=499), there was no significant difference in end point rate between patients with high and low on-aspirin RPR.

High RPR in the Total Study Population
The composite end point rate in patients with high on-aspirin RPR treated with clopidogrel was not different from that in patients treated with aspirin (7.6% versus 13.3%, P=0.16) (Table 4).

Low RPR in the Total Study Population
In patients with low on-aspirin RPR at baseline (total n=741), there was no significant difference in end point rate between the aspirin group (38 of 337 [11.3%]) and the clopidogrel group.

Bleedings and Adverse Events
During the follow-up period, there were 130 bleeding episodes, 7 major and 123 minor. There was a significantly lower frequency of total bleedings in the aspirin group than in the clopidogrel group.

Discussion
The results from the present study show that high on-aspirin RPR, as determined by the PFA100 method in patients with CAD continuing on aspirin, did not predict clinical outcome after 2 years of follow-up. Nevertheless, borderline significance was noted with regard to the risk for myocardial infarction in patients with high on-aspirin RPR. The ASCET study is, to the best of our knowledge, the first prospective, randomized trial in which platelet function testing has been related to clinical events in patients on single-antiplatelet therapy with aspirin. Recent evidence has shown that aspirin given once daily does not provide stable 24-hour antiplatelet protection in all patients with CAD. 6 In the present study, all patients were tested in the fasting condition 24 hours after intake of 160 mg aspirin, contributing to more uniform pharmacokinetics in the studied population. During recent decades, morbidity and mortality rates in patients with CAD have decreased. 26,27
Despite a relatively large study population (n=1001), the observed end point rate was less than half of the expected rate upon which the power calculation was based. This explains the negative overall results of the study. Our main results are not in accordance with previous trials in which platelet function assessed by the PFA100 method predicted outcome after coronary angioplasty. 10,11,14 Similarly, a meta-analysis of 20 trials, including 2930 patients, demonstrated an overall OR for clinical events of 3.85 (95% CI, 3.08 to 4.80; P<0.001) in patients with high on-aspirin platelet reactivity as determined by PFA100 and other platelet function tests. 13 The PFA100 method, which was the only "point-of-care" test available when our study was started, has some limitations. It is only partly dependent on platelet cyclooxygenase-1 (COX-1) activity, and the low COX-1 specificity might to some degree explain the diverging results in studies on aspirin nonresponsiveness. 4,28 Nevertheless, in our study, all patients on aspirin had low serum TxB₂ levels, indicating that their COX-1 pathway of platelet activation was largely inhibited. 29,30 The high RPR therefore seems to depend more on platelet activation via other mechanisms. It has been reported that the PFA100 method has a lower predictive value than COX-1-specific tests (platelet aggregometry tests). 3 The PFA100 method might nonetheless be more relevant than the more specific tests, because it records the RPR in aspirin-treated patients while their COX-1 pathways are inhibited. 31 The closure time with the PFA100 method is shown to be prolonged in patients with very low levels of vWF. 32 No patients in our study had pathologically low levels; thus, any influence of low levels of vWF on the frequency of high on-aspirin RPR should be disregarded. Nevertheless, patients continuing on aspirin with vWF levels below the median value (106%) and high on-aspirin RPR had a statistically significantly higher end point rate than patients with low RPR. This has not been described previously, and a possible explanation might be that patients with high on-aspirin RPR, despite their lower vWF levels, have other, more dominant platelet-activating pathways that are not inhibited by aspirin. High RPR has been associated with high platelet count. 5 This is in line with our findings. Patients with high on-aspirin RPR despite a below-median platelet count (227×10⁹/L) had a significantly increased end point rate when compared with patients with low on-aspirin RPR. This might be explained by increased platelet turnover and an increased fraction of circulating immature platelets, which also might increase the event risk. 5 It should be pointed out that the findings of a higher end point rate in patients with below-median values of vWF and platelet count are post hoc analyses not included in the primary aims of the study. These findings should therefore be interpreted carefully and might be hypothesis generating for further studies. In patients randomized to clopidogrel (n=499), the end point rate did not differ from that of the group that continued aspirin. This is in accordance with the results of the Clopidogrel versus Aspirin in Patients at Risk of Ischaemic Events (CAPRIE) trial.
Although this trial showed an overall benefit of clopidogrel treatment versus aspirin, patients entering the trial because of CAD did not benefit from clopidogrel treatment. 17 The composite end point rate in patients with high on-aspirin RPR treated with clopidogrel was not different from that of patients treated with aspirin (7.6% versus 13.3%, P=0.16). We could not demonstrate that these patients would benefit from changing from aspirin treatment to clopidogrel treatment. The observed, not statistically significant reduction in end point rate (43%) might, however, support the suggested hypothesis that platelet activation mechanisms other than the COX-1 pathway might be more important for platelet activation in patients with high on-aspirin RPR. 31 Patients' compliance with aspirin therapy, as determined by serum TxB₂ in all patients, was excellent. The median TxB₂ level was 2.7 ng/mL. In patients not on aspirin, the levels are typically ≈200 to 300 ng/mL. 8,33 Thus, the lack of response to aspirin cannot be explained by noncompliance. The main limitation of our study was lack of statistical power. Less than half of the estimated number of events dramatically reduced the possibility of detecting statistically significant effects on clinical outcome. The estimation of the end point rate was based on data from similar populations available in 2002. 34 The reports on low predictive values of the PFA100 method, when compared with COX-1-specific tests such as platelet aggregometry, might be considered a limitation of the present trial, even though it could be argued that a COX-1-nonspecific test can be more clinically relevant for identifying RPR in aspirin-treated patients. 3,31 In addition, any influence of "high on-clopidogrel RPR" on the end point rate has not been taken into consideration.

Conclusions
Response to aspirin treatment evaluated with the PFA100 method did not influence the overall clinical outcome and did not identify a group responsive to clopidogrel in stable CAD patients on single-antiplatelet therapy with aspirin after a follow-up of 2 years. Post hoc subgroup analysis raised the possibility that high on-aspirin RPR might be predictive in patients with low vWF or platelet count, but these findings will require confirmation in future trials.
Efficiency Improvement of On-Grid Solar PV System Using Hybrid MPPT

Solar photovoltaic (PV) systems are among the most commonly used renewable energy resources nowadays. PV cells are eco-friendly, pollution-free, simple in construction, and compact in size. The solar PV cell generates electricity simply by capturing the sun's rays. The main advantage of the solar PV cell is that it depends only on sunlight, and the sun is available everywhere. However, the power generated by a photovoltaic cell has low conversion efficiency. This paper therefore proposes a hybrid maximum power point tracker for improving the efficiency of the energy generated by a grid-connected solar PV system.

Keywords: Hybrid MPPT (Maximum Power Point Tracking), solar photovoltaic cell, solar panel.

I. INTRODUCTION
With the increasing cost of fossil fuels and the growing air pollution caused by conventional energy resources, solar energy is one of the best non-conventional energy resources for overcoming the problems associated with fossil fuels [1], [2]. Solar energy has many advantages over conventional energy resources, such as low cost, noise-free operation, and a pollution-free environment that promotes green energy [3], [4]. In the coming years there will be a huge demand for energy, since electrical loads keep increasing day by day as the number of customers grows [5]. Therefore, in the future it will be very difficult for the utility system to meet the energy demand, and if the demand is not met there will be blackouts or power failures [6]. Thus, inventive steps are required to meet the customers' power demand. For this, alternative renewable energy resources can be used as distributed generation (DG). By installing a solar PV system as DG, the burden on the utility system is reduced [7], [8]. Besides its advantages, the solar panel has some disadvantages:
1. The capital cost and equipment cost of a solar PV system are high.
2. The energy-conversion efficiency of a solar PV system is low.
3. More right of way is required for installation.
4. It is dependent on weather conditions.
Hence, to address the efficiency drawback, a hybrid maximum power point tracking (MPPT) system is designed that can improve the efficiency of the solar PV system [9]. By placing the MPPT at a suitable location in the solar PV array, the efficiency of solar-to-electrical energy conversion is increased by up to 60 percent. This paper therefore focuses on improving the efficiency of a grid-connected solar PV system by employing a hybrid MPPT.

Fig. 1 shows the block diagram of the grid-connected photovoltaic system. The system comprises a plurality of photovoltaic cells, preferably a photovoltaic array connected in series, for extracting solar energy and converting it into electrical energy; a buck-boost converter associated with the photovoltaic array for stepping up the DC voltage coming out of the array; an MPPT coupled with the buck-boost converter for extracting the maximum power from the sun, thereby improving the efficiency of the solar PV system; a voltage source converter for converting the boosted DC voltage into AC voltage; and a utility grid associated with the voltage source converter for transferring electrical power to the different feeders and customers' households. The equation for the temperature voltage is given by the below equation:

III. SIMULATION RESULTS
The following results have been obtained by performing the corresponding coding in the MATLAB simulation software.
To improve the efficiency, two methods are compared: the first is the perturb and observe (P&O) MPPT method, and the second is the hybrid MPPT method. By coupling each MPPT in turn with the solar PV array, the following characteristics were observed in the MATLAB simulation. Fig. 3 represents the output power of the grid-connected PV system with the perturb and observe MPPT method. This characteristic shows the maximum power output obtained when using the perturb algorithm. It is clear from this characteristic that less power is obtained when using the perturb MPPT method. Fig. 4 represents the maximum output power of the grid-connected PV system with the hybrid MPPT method. It is observed that when the hybrid MPPT is connected in place of the perturb MPPT, a more reliable characteristic is formed. The output power obtained with the hybrid MPPT method is much greater than that obtained with the perturb MPPT method. Fig. 5 shows the duty cycle of the grid-connected PV system with the hybrid MPPT method; the duty cycle, that is, the fraction of each switching period for which the converter switch conducts, remains constant throughout.

IV. CONCLUSION
The grid-connected solar PV system was successfully designed. To improve the efficiency of the solar PV system, a maximum power point tracking system is connected to the solar PV array. The efficiency of the solar PV system was tested by comparing the perturb MPPT method with the hybrid MPPT method. It is observed from this comparison that the maximum output is obtained from the solar PV system by using the hybrid MPPT method in place of the perturb MPPT method. Since the conversion efficiency from solar energy to electrical energy of a solar PV system is very low, the use of MPPT plays a great role in improving the efficiency of the solar PV system.
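For reference, the following is a minimal sketch of the conventional perturb and observe loop used as the baseline method in the comparison above. The toy power curve, step size, and names are illustrative assumptions, not the paper's MATLAB model; the hybrid controller is not specified in enough detail in the source to reproduce.

```python
# Minimal perturb-and-observe (P&O) MPPT sketch. In a real system the
# voltage comes from panel measurements and the perturbation is applied
# through the converter duty cycle; here a toy P-V curve stands in.

def panel_power(v):
    """Toy P-V curve with a single maximum power point near 17 V."""
    return max(0.0, 60.0 - (v - 17.0) ** 2)

def perturb_and_observe(v=12.0, step=0.2, iterations=100):
    p_prev = panel_power(v)
    direction = 1.0                      # initial perturbation direction
    for _ in range(iterations):
        v += direction * step            # perturb the operating voltage
        p = panel_power(v)
        if p < p_prev:                   # power dropped: reverse direction
            direction = -direction
        p_prev = p                       # observe and remember the new power
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"Settled near V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")  # oscillates about 17 V
```

The persistent oscillation around the maximum power point visible in such a loop is the main weakness that hybrid trackers aim to remove.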
Identification of a Novel Osteogenetic Oligodeoxynucleotide (osteoDN) That Promotes Osteoblast Differentiation in a TLR9-Independent Manner

Dysfunction of bone-forming cells, osteoblasts, is one of the causes of osteoporosis. Accumulating evidence has indicated that oligodeoxynucleotides (ODNs) designed from genome sequences have the potential to regulate osteogenic cell fate. Such osteogenetic ODNs (osteoDNs) targeting and activating osteoblasts are candidates for nucleic acid drugs against osteoporosis. In this study, an ODN library derived from the Lacticaseibacillus rhamnosus GG genome was screened to determine its osteogenetic effect on the murine osteoblast cell line MC3T3-E1. An 18-base ODN, iSN40, was identified that enhances the alkaline phosphatase activity of osteoblasts within 48 h. iSN40 also induced the expression of osteogenic genes such as Msx2, osterix, collagen type 1α, osteopontin, and osteocalcin. Eventually, iSN40 facilitated calcium deposition on osteoblasts at the late stage of differentiation. Intriguingly, the CpG motif within iSN40 was not required for its osteogenetic activity, indicating that iSN40 functions in a TLR9-independent manner. These data demonstrate that iSN40 serves as a novel osteogenetic ODN (osteoDN) that promotes osteoblast differentiation. iSN40 provides a potential seed for a nucleic acid drug that activates osteoblasts for osteoporosis therapy.

Introduction
The bone is a metabolically active organ that remodels continuously throughout life. However, bone remodeling function is impaired with aging, resulting in reduced bone mass and micro-architectural deterioration [1]. This finally causes osteoporosis and increases fracture risk, affecting around 200 million people and causing 8.9 million fractures annually worldwide [2]. One of the pathological events underlying osteoporosis is the dysfunction of bone-forming cells, osteoblasts. Differentiation of osteoblasts into osteocytes is regulated by the canonical Wnt/β-catenin signaling pathway [3]. It stimulates two bone-specific transcription factors, Runx2 and osterix [4], which coordinately regulate the expression of osteogenic genes such as collagen type 1α, osteopontin, osteocalcin, and receptor activator of NF-κB ligand (RANKL) [5]. However, osteoblasts of elderly people present dysregulated expression of these bone-related genes [6][7][8], indicating a decreased bone-forming capacity of aged osteoblasts. Therefore, re-activation of osteoblasts is a key therapeutic strategy for osteoporosis. Several agents have been developed to regulate osteoblast function [9]. Teriparatide, a parathyroid hormone (PTH) fragment, directly targets the PTH receptor on osteoblasts to induce osteoprotegerin and activate bone anabolism [10]. Denosumab, a monoclonal antibody against RANKL, increases bone mineralization by preventing bone resorption [11]. Currently, it is the most preferred treatment for post-menopausal osteoporosis [12]. Sclerostin is an inhibitor of Wnt signaling [13]. Thus, romosozumab, a sclerostin antibody, activates the Wnt pathway to stimulate bone formation [14]. Their beneficial effects in osteoporosis therapy have been robustly proven. However, with current technology, these protein/peptide-based drugs are expensive and difficult to produce stably in the large quantities needed for the treatment of the large number of osteoporosis patients all over the world. To overcome this problem, other types of molecules targeting osteoblasts have been studied.
As nucleic acids offer several advantages, such as chemical synthesis, low-cost manufacturing, and stability during storage, they are potential nanomolecules for next-generation drugs. A wide variety of nucleotides have been clinically applied; antisense nucleotides that regulate gene expression [15], aptamers that target proteins [16], and ligands of Toll-like receptors (TLRs) that modulate the innate immune system [17] can specifically access their diverse targets. Furthermore, some oligodeoxynucleotides (ODNs) have been reported to regulate osteogenic cell fate. A 27-base cytosine (C)-rich ODN designed from the human mitochondrial genome, named MT01 ([ACC CCC TCT]₃), was initially identified as an immunosuppressive ODN that inhibits the proliferation of human peripheral blood mononuclear cells [18]. Intriguingly, MT01 promoted the proliferation and differentiation of the human osteoblast-like cell line MG63 [19], rat bone marrow mesenchymal stem cells [20], and the murine osteoblast cell line MC3T3-E1 [21] by upregulating Runx2 phosphorylation via activation of the ERK/MAPK pathway [22]. Thus, MT01 treatment reduced alveolar bone loss in rat periodontitis in vivo [20]. These studies suggest that genomic ODNs could be used as drug seeds for bone regeneration. For muscle, an ODN library designed from the genome sequence of the lactic acid bacterium Lacticaseibacillus rhamnosus GG [23] has been screened to determine its effect on myogenic cell fate. The 18-base guanine (G)-rich ODN iSN04 was found to promote differentiation and suppress inflammation of myogenic precursor cells, myoblasts [24][25][26][27][28]. iSN04, termed a myogenetic ODN (myoDN), formed a globular structure of 1-nm radius and served as an anti-nucleolin aptamer in a TLR-independent manner [24]. The discovery of myoDNs demonstrates that bacterial genome sequences are promising platforms for providing novel ODNs that can control cell differentiation. To validate this concept, this study explored the ODN library used in the myoDN study to identify an osteogenetic ODN (osteoDN) that facilitates the differentiation and calcification of osteoblasts.

Chemicals
The ODN sequences used in this study are listed in Table S1. Phosphorothioated (PS) ODNs and 6-carboxyfluorescein (6-FAM)-conjugated PS-ODNs were synthesized and purified using HPLC (GeneDesign, Osaka, Japan). PS-ODNs and ascorbic acid (AA) (Fujifilm Wako Chemicals, Osaka, Japan) were dissolved in endotoxin-free water. An equal volume of solvent was used as the negative control.

Cell Culture
The MC3T3-E1 cell line (RCB1126) was provided by the RIKEN BRC (Tsukuba, Japan) through the Project for Realization of Regenerative Medicine and the National Bio-Resource Project of the MEXT, Japan. The cells were cultured at 37 °C with 5% CO₂ throughout the experiments. Undifferentiated MC3T3-E1 cells were maintained in a growth medium (GM) consisting of α-MEM (Nacalai, Osaka, Japan), 10% fetal bovine serum (FBS) (HyClone; GE Healthcare, Chicago, IL, USA), and a mixed solution of 100 units/mL penicillin and 100 µg/mL streptomycin (P/S) (Nacalai). To induce osteogenic differentiation, confluent MC3T3-E1 cells were cultured in a differentiation medium (DM) consisting of α-MEM, 10% FBS, 10 nM dexamethasone (Fujifilm Wako Chemicals), 10 mM β-glycerophosphate (Nacalai), and P/S. To facilitate differentiation, 15 or 50 µg/mL AA was added to GM or DM as required.
Alkaline Phosphatase (ALP) Staining
MC3T3-E1 cells were seeded on 96-well plates (1.0 × 10⁴ cells/100 µL GM/well) for screening or on 12-well plates (1.0 × 10⁴ cells/well) for high-resolution imaging. The next day, the medium was replaced with GM or DM containing 10 µM PS-ODNs. After 48 h, the ALP enzymatic activity of the cells was visualized using the ALP Stain Kit (Fujifilm Wako Chemicals) according to the manufacturer's instructions. Phase-contrast images were obtained using an EVOS FL Auto microscope (AMAFD1000; Thermo Fisher Scientific, Waltham, MA, USA). The ALP-positive area was quantified using ImageJ software version 1.52a (Wayne Rasband; National Institutes of Health, Bethesda, MD, USA).

Alizarin Staining
MC3T3-E1 cells were seeded on 24-well plates (3.0-5.0 × 10⁴ cells/well) and cultured in GM until they became confluent. The medium was then replaced with DM containing PS-ODNs every 3-4 days. The cells were fixed with 2% paraformaldehyde and stained with 1% w/v alizarin red S (Fujifilm Wako Chemicals). Bright-field images were captured using the EVOS FL Auto microscope. The alizarin-positive area was quantified using ImageJ.

Quantitative Real-Time RT-PCR (qPCR)
MC3T3-E1 cells were seeded on 60-mm dishes (3.0 × 10⁵ cells/dish). The next day, the medium was replaced with GM or DM containing 10 µM iSN40. Total RNA was isolated using NucleoSpin RNA Plus (Macherey-Nagel, Düren, Germany) and reverse transcribed using ReverTra Ace qPCR RT Master Mix (TOYOBO, Osaka, Japan). qPCR was performed using GoTaq qPCR Master Mix (Promega, Madison, WI, USA) with the StepOne Real-Time PCR System (Thermo Fisher Scientific). The amount of each transcript was normalized to that of the tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein zeta gene (Ywhaz) and presented as fold-changes. The primer sequences are listed in Table S2.

Trivial Trajectory Parallelization of Multicanonical Molecular Dynamics (TTP-McMD)
Starting with simulations of the iSN40 and MT01 structures built from their DNA sequences using NAB in AmberTools [29], an enhanced ensemble method, TTP-McMD [30], was used to sample the equilibrated conformations at 310 K. In the TTP-McMD, the energy range of the multicanonical ensemble covered a temperature range from 280 K to 380 K. Sixty trajectories were used, and the production run was conducted for 40 ns in each trajectory (2.4 µs in total). Throughout the simulation, the Amber ff12SB force field [31] was used, and the solvation effect was represented by the generalized Born model [32].

Statistical Analysis
The results are presented as the mean ± standard error. Statistical comparisons between two groups were performed using unpaired two-tailed Student's t-test, and among multiple groups using one-way analysis of variance followed by Scheffe's F-test. Statistical significance and p values are indicated in the figures.

iSN40 Promotes Osteoblast Differentiation
Forty-four 18-base PS-ODNs designed from the Lacticaseibacillus rhamnosus GG genome (iSN04 and iSN08-iSN50) were administered to MC3T3-E1 cells. In addition, two immunomodulatory PS-ODNs were tested concomitantly: CpG-2006, a TLR9 ligand that initiates inflammatory cascades [33], and Tel-ODN, a human telomeric ODN that suppresses immunological reactions depending on TLR3/7/9 [34]. The osteogenic effects of these PS-ODNs were investigated by measuring the ALP enzymatic activity of the cells, a standard marker of osteoblast differentiation (Figure 1A).
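The ALP- and alizarin-positive areas described in the Methods above are scored as stained-area fractions in ImageJ. A minimal sketch of equivalent thresholding logic follows; the image array and threshold value are hypothetical stand-ins, not the study's actual data or settings.

```python
# Minimal sketch of positive-area quantification, as performed in ImageJ
# for the ALP and alizarin stains. A real analysis would load micrographs;
# here the image array and threshold are illustrative assumptions.
import numpy as np

def positive_area_fraction(image: np.ndarray, threshold: float) -> float:
    """Fraction of pixels whose staining intensity exceeds the threshold."""
    return float((image > threshold).mean())

# Hypothetical 8-bit grayscale field of view (0 = unstained, 255 = dark stain).
rng = np.random.default_rng(0)
field = rng.integers(0, 256, size=(512, 512))
print(f"Positive area: {positive_area_fraction(field, threshold=200):.1%}")
```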
iSN40 (GGA ACG ATC CTC AAG CTT) markedly increased the ALP-positive area to the same extent as AA, the positive control for promoting osteogenic differentiation (Figure 1B). However, the other PS-ODNs did not enhance ALP activity, indicating the sequence-dependent osteogenetic activity of iSN40. iSN40-induced ALP activity was reproducibly confirmed by high-resolution images of an experiment performed independently of the screening (Figure S1). These results demonstrate that iSN40 serves as an osteoDN that promotes osteoblast differentiation.

iSN40 Modulates Osteogenic Gene Expression
The effect of iSN40 on osteogenic gene expression in MC3T3-E1 cells was investigated by qPCR. To analyze the early stage of differentiation, MC3T3-E1 cells were treated with iSN40 in GM containing AA for 24-48 h (Figure 2A). iSN40 increased the mRNA levels of Msx2, a homeobox transcription factor that promotes osteogenesis through bone morphogenic protein 2-induced signaling. In contrast, iSN40 did not alter the levels of Runx2, a master regulator that determines osteoblast lineage and inhibits bone maturation [35]. Interestingly, iSN40 significantly upregulated the expression of osterix (Sp7), a downstream target of Runx2. iSN40 did not improve the mRNA levels of the immature osteoblast markers collagen type 1α (Col1a1) and osteopontin (Spp1); however, iSN40 markedly induced osteocalcin (Bglap2), a hormone released from the bone [36]. To investigate the late stage of differentiation, MC3T3-E1 cells were treated with iSN40 in DM containing AA for 4-8 days (Figure 2B). iSN40 significantly elevated the levels of Msx2 on day 4 but not those of Runx2. Notably, by day 8, iSN40 upregulated the expression of osterix, collagen type 1α, osteopontin, and osteocalcin, which act downstream of Runx2 [37]. Compared with the results at the early stage, the effect of iSN40 on the expression of these genes was more significant at the late stage, suggesting a long-term activity of iSN40. These data indicate that iSN40 facilitates osteoblast differentiation by modulating osteogenic gene expression through the transcription factor Msx2 rather than Runx2.
iSN40 Promotes Osteoblast Calcification
To investigate the effect of iSN40 on osteoblast calcification, MC3T3-E1 cells were treated with iSN40 in DM with or without AA for 12 days and then subjected to alizarin staining to visualize calcium deposition on the cells (Figure 3A). The alizarin-positive area was significantly increased by 10 µM iSN40, more so than by 15 µg/mL AA. Moreover, compared with the individual treatments, co-treatment with iSN40 and AA further facilitated calcification, indicating that iSN40 and AA synergistically promote osteoblast mineralization. Although iSN40 was administered at a concentration of 10 µM in the above experiments, dose-response analysis revealed that 1 µM iSN40 was sufficient to induce osteoblast calcification in the presence of 15 µg/mL AA (Figure 3B). Phosphorothioated iSN40 (PS-iSN40) was used in this study to avoid degradation by nucleases; native iSN40 did not promote calcification even in the presence of 50 µg/mL AA (Figure S2). This demonstrates that phosphorothioation is necessary for iSN40 to function as an osteoDN.

Osteogenetic Action of iSN40 Is TLR9-Independent
iSN40 (GGA ACG ATC CTC AAG CTT) contains a CpG motif. ODNs possessing unmethylated CpG motifs (CpG-ODNs) have been shown to serve as TLR9 ligands and initiate innate immune system-induced inflammatory responses [38].
To examine the impact of the CpG motif within iSN40 on its osteogenetic activity, iSN40-GC (GGA AGC ATC CTC AAG CTT) was constructed, in which the CpG motif was substituted with GC. iSN40-GC enhanced the ALP activity and calcification of MC3T3-E1 cells to the same extent as iSN40 (Figure 4). Conversely, a well-known TLR9 ligand, CpG-2006 [33], did not facilitate osteoblast mineralization (Figure 4B), which was further confirmed by the ALP staining results (Figures 1 and S1). In addition, RT-PCR revealed that MC3T3-E1 cells do not express TLR9 (Figure S3), corresponding to a previous study [39]. These results demonstrate that the osteogenetic action of iSN40 is independent of its CpG motif and TLR9.

Discussion
The present study identified that iSN40, an 18-base ODN derived from the genome sequence of lactic acid bacteria, serves as an osteoDN that promotes the differentiation and mineralization of osteoblasts. It could be a candidate nucleic acid drug that activates osteoblasts for osteoporosis therapy. Although the direct target and mechanism of action of iSN40 are unknown, the results revealed that the osteogenetic function of iSN40 is TLR9-independent: the CpG motif within iSN40 was not required for its activity; a validated TLR9 ligand, CpG-2006 [33], enhanced neither ALP activity nor calcification of MC3T3-E1 cells; and TLR9 was not expressed in MC3T3-E1 cells.
The TLR9-independent effect of iSN40 is crucial for its clinical application. The innate immune system, including IL-6 secretion, is closely related to bone homeostasis [40]. For example, CpG-ODNs generally upregulate interleukin (IL)-6 production via TLR9 [17]. IL-6 is a pro-osteoclastogenic cytokine [3] that is highly produced in aged osteoblasts [7]. Therefore, the TLR9-independent osteogenetic activity of iSN40 is a favorable characteristic for osteoporosis therapy. However, it remains ambiguous whether iSN40 never serves as a CpG-ODN in TLR9-expressing cells. Although human osteoblasts and bone-derived mesenchymal stem cells do not express TLR9, as MC3T3-E1 cells do not [41], other cell types in bone tissue, such as osteoclasts, express TLR9 as a mediator of bone metabolism [42]. Thus, the immunological and pro-inflammatory effects of iSN40 on TLR9-expressing cells in bone tissue need to be elucidated before future application in clinical settings.
iSN40 upregulated the expression of osteogenic genes such as osterix, collagen type 1α, osteopontin, and osteocalcin, which are induced by Runx2 [37]. However, iSN40 did not alter Runx2 expression. The transcriptional activity of Runx2 is regulated via post-translational modifications, including phosphorylation, acetylation, sumoylation, and ubiquitination, which are mediated by numerous factors [43]. iSN40 may target such modulators to enhance Runx2 activity. For instance, another osteoDN, MT01, has been reported to enhance Runx2 phosphorylation via activation of ERK and p38 MAPK [22], which potentiates the Runx2/osterix transcriptional machinery [44]. The effect of iSN40 on the Runx2 protein needs to be examined in further studies. iSN41-iSN47 are PS-ODNs analogous to iSN40; however, they did not enhance the ALP activity of MC3T3-E1 cells. If iSN40 operated as an antisense nucleotide, iSN41-iSN47 would be expected to present partial osteogenetic activities. To discuss the mechanism of action of iSN40, a previous study reporting iSN04 is helpful.
iSN41-iSN47 are PS-ODNs analogous to iSN40; however, they did not enhance the ALP activity of MC3T3-E1 cells. If iSN40 operated as an antisense nucleotide, iSN41-iSN47 would be expected to show at least partial osteogenetic activities. A previous study reporting iSN04 is helpful for discussing the mechanism of action of iSN40. iSN04 was designed from the lactic acid bacteria genome, as was iSN40, and serves as a myoDN that promotes myoblast differentiation in a TLR-independent manner [24]. iSN04 is taken up into the cytoplasm without any carrier, forms a G-stacking structure, and physically interacts with nucleolin to interfere with its function, indicating that iSN04 is an anti-nucleolin aptamer [24]. It is thus quite possible that iSN40 works as an aptamer for target proteins. Administration of 6-FAM-conjugated iSN40 and iSN40-GC to MC3T3-E1 cells showed that they were autonomously incorporated into the cytoplasm within 30 min (Figure 5), suggesting that iSN40 acts intracellularly rather than on the plasma membrane.

To further assess the potential of iSN40 as an aptamer, the conformation of iSN40 in water at 310 K was simulated using TTP-McMD (Figure 6A). iSN40 exhibited a compact globular structure (average radius: 1.01 nm) similar to that of iSN04 [24]. Interestingly, the predicted structure of another osteoDN, MT01, was substantially different from that of iSN40 (Figure 6B), suggesting that the action mechanisms of iSN40 and MT01 are likely to be dissimilar. Indeed, iSN40 did not alter the mRNA levels of Runx2, but MT01 significantly induced Runx2 transcription [19,20]. To determine whether iSN40 is an aptamer, the iSN40-binding protein in osteoblasts needs to be identified. This would further promote the development and application of iSN40 as an osteoDN for bone regeneration.
Conclusions

The present study successfully identified a novel osteoDN, iSN40, an 18-base ODN designed from the lactic acid bacteria genome. iSN40 promoted the differentiation and calcification of osteoblasts by modulating osteogenic gene expression in a TLR9-independent manner. This demonstrates that an ODN library derived from bacterial genomes can serve as a platform for discovering ODNs that target and regulate osteoblasts. Activation of osteoblasts by ODNs would provide an alternative strategy for osteoporosis therapy by promoting bone formation and mineralization.

Patents

Shinshu University has been assigned the invention of the osteoDN by T.T., Y.N., K.U., and T.S., and Japan Patent Application 2021-122713 was filed on 27 July 2021.

Data Availability Statement: The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Atmospheric CO2 seasonality and the air-sea flux of CO2

1 Introduction

Rapid cooling, high biological activity, strong winds, and the pumping of surface waters to depth make the high-latitude oceans the planet's major atmospheric CO2 sinks. Variability in the strength of these sinks is known to be large, but is presently poorly understood (e.g., Watson et al., 2009; Le Quere et al., 2007). Photosynthesis within the large, and extensively vegetated, land masses of North America and Eurasia drives a major spring-summer CO2 uptake in the Northern Hemisphere, with leaf-fall and breakdown causing a compensating CO2 release during autumn and winter (Machta et al., 1977). In the high northern latitudes the amplitude of the seasonal cycle of atmospheric CO2 (hereafter CO2atm) presently exceeds 15 ppm (Peters et al., 2007), and has been increasing significantly in an apparent response to rising atmospheric CO2 concentrations (+40% between the 1960s and 1990s) (Keeling et al., 1996). Seasonality changes are also a potential driver of glacial-interglacial cyclicity (e.g., Gildor and Tziperman, 2000; Denton et al., 2005), as suggested by the close statistical coupling between variability in the Earth's obliquity and glacial terminations (Huybers and Wunsch, 2005). High obliquity means high intra-annual variability in high-latitude insolation, and appears to be a precondition for deglaciation (Loutre et al., 2004; Huybers and Wunsch, 2005; Liu et al., 2008). To understand past change, and to robustly investigate potential future change, we must therefore understand interactions between seasonality and the global carbon cycle. In this paper I present two novel mechanisms linking changes in seasonality with changes in the magnitude of oceanic CO2 sources and sinks.

2 Methods

Pre-industrial climate simulations (i.e., simulations without prescribed external forcings) were undertaken using the earth system model HadGEM2-ES, with fully interactive and coupled ocean and terrestrial biogeochemistry (Martin et al., 2011; Collins et al., 2011), and a fully coupled and well-validated sea-ice component (McLaren et al., 2006). Improvements to the leaf phenology within the model now allow for good reproduction of the CO2atm seasonal cycle (Collins et al., 2008), and improvements to the physical and ocean-biogeochemistry components lead to good agreement between the modelled and observed spatial pattern of air-sea CO2 flux (Fig. 1).
Within the HadGEM2-ES model, air-sea carbon fluxes are calculated as a function of the atmospheric and ocean CO2 concentrations, the seawater temperature and salinity (which influence the CO2 solubility and the transfer velocity), the windspeed, and the sea-ice concentration, as described for a previous model version by Palmer and Totterdell (2001). To calculate the impact on air-sea fluxes of changes in the seasonal cycle of atmospheric CO2 concentrations, without the need to spin up new model simulations to equilibrium with those seasonal cycles, I extracted the physical and chemical fields described above from the model's pre-industrial state and undertook the air-sea CO2 flux calculations offline. Within these offline calculations, I varied the values relating to the atmospheric CO2 concentration to simulate an increased or decreased CO2atm seasonal cycle magnitude. The different-magnitude CO2atm seasonal cycles were calculated by multiplying the difference between the individual monthly CO2atm values and the annual average CO2atm concentration in each latitude-longitude box by a specified factor (zero, one, or two), and adding the result to the annual mean value at that point. Within this paper, "1x seasonal cycle" therefore refers to the seasonality simulated for the pre-industrial period within the model, "0x seasonal cycle" refers to a situation without any temporal variability, and "2x seasonal cycle" considers the annual variability at each point to be twice that simulated for the pre-industrial period. The calculations undertaken for this study differ from those which would be undertaken within the model only in that annual, rather than daily, mean values for the physical and chemical fields were used, and the surface wind field was calculated from wind mixing energy rather than taken directly from the model's atmosphere. In all situations, using monthly mean values instead of continuous seasonal cycles will decrease the seasonal cycle's total magnitude, so the magnitude of the results can be considered conservative. The result of these calculations is an instantaneous value for the air-sea CO2 flux immediately after the seasonality change, rather than an estimate of the net ocean-atmosphere carbon exchange occurring in response to a change in CO2atm seasonality. That net exchange would be a function of (to first order) circulation, the change of temperature with latitude, the wind-driven rate of air-sea CO2 exchange, time, and the spatial pattern of the air-sea flux change.
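The amplitude scaling described above can be expressed compactly. The following Python sketch is illustrative only (the function name and the idealised cosine cycle are not from the paper): it rescales a 12-month CO2atm series about its annual mean by a factor of zero, one, or two, preserving the annual mean while changing the amplitude, exactly as in the offline calculations.

```python
import numpy as np

def scale_seasonal_cycle(monthly_co2, factor):
    """Rescale a 12-month CO2 series about its annual mean.

    The anomaly of each monthly value from the annual mean is multiplied
    by `factor` (0, 1, or 2) and added back to the mean, so the annual
    mean is preserved while the seasonal amplitude changes.
    """
    monthly_co2 = np.asarray(monthly_co2, dtype=float)
    annual_mean = monthly_co2.mean()
    return annual_mean + factor * (monthly_co2 - annual_mean)

# Idealised example: a 4 ppm amplitude cycle around 280 ppm
months = np.arange(12)
co2 = 280.0 + 4.0 * np.cos(2 * np.pi * months / 12)
co2_0x = scale_seasonal_cycle(co2, 0)  # no seasonality
co2_2x = scale_seasonal_cycle(co2, 2)  # doubled amplitude, same annual mean
print(co2.mean(), co2_0x.mean(), co2_2x.mean())  # annual mean is unchanged
```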
3 Results

Investigating the consequences of artificially doubling the amplitude of the seasonal cycle simulated within a pre-industrial earth-system-model (HadGEM2-ES; Martin et al., 2011; Jones et al., 2011; Collins et al., 2008) climate, and calculating the instantaneous change in the air-sea flux of CO2, I find two contrasting impacts. The first is a relative out-gassing from the high-latitude oceans; the second is a relative in-gassing into the mid-to-high-latitude oceans (Fig. 2). These two effects will initially be considered separately.

In the high latitudes, particularly of the Northern Hemisphere, the seasonal cycles of CO2atm concentration and sea-ice extent vary approximately in phase. The reason for the synchronous change is that they share a common driving mechanism: light. During the winter months, little light is available for either photosynthesis or heating of the surface ocean; vegetation growth therefore slows or vegetation dies back, causing a net release of CO2, and (due to the cooling) sea-ice forms. Conversely, in the spring and summer, vegetation begins to draw down CO2 from the atmosphere and, as more light reaches the ocean, sea-ice begins to melt. Making the first-order assumption that sea-ice is impermeable to CO2 (although some CO2 flux through continuous ice has been observed; e.g., Zemmelink et al., 2006), the average CO2atm concentration that the ocean sees is reduced relative to its full annual average (Fig. 3).

In the mid-to-high latitudes, equatorward of the maximum seasonal extent of sea-ice, I find that the change in air-sea CO2 flux resulting from a change in CO2atm seasonality is driven by the intra-annual variability in CO2 solubility. In common with the sea-ice mechanism, in the mid-to-high latitudes, particularly in the Northern Hemisphere, the seasonal cycle of CO2 solubility varies approximately in phase with the seasonal cycle of CO2atm concentration (Fig. 4). The exchange of CO2 between the atmosphere and the ocean acts to bring the concentrations in each medium towards equilibrium: air-sea equilibrium occurs between the partial pressure of CO2 in seawater multiplied by the solubility of CO2 in that seawater, and the CO2atm concentration multiplied by the solubility of CO2 in the seawater below. The solubility of CO2 in seawater is a function of seawater temperature and, to a lesser degree, salinity. Again, CO2 solubility and the CO2atm concentration in the high (particularly northern) latitudes share a common driver, incident light (and therefore heat). With a high CO2 solubility when the CO2atm concentration is high, a low CO2 solubility when the CO2atm concentration is low, and both the CO2 concentration and the solubility always positive, an increase in the CO2atm seasonal cycle magnitude will cause an increase in the annually averaged product of the two quantities (Fig. 5). Raising the CO2atm concentration in potential equilibrium with seawater will increase the CO2 gradient into the ocean, and promote a relative increase in the air-to-sea CO2 flux.
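The solubility mechanism follows from a simple statistical identity: for two cycles s(t) and c(t), the annual mean of their product is mean(s)*mean(c) + cov(s, c), and scaling the CO2atm anomaly by a factor k scales the covariance term by k. When the cycles are in phase the covariance is positive, so the annually averaged product rises with amplitude. The short numeric check below uses invented values purely for illustration.

```python
import numpy as np

# Illustrative check (values are invented, not from the paper): when the
# seasonal cycles of CO2 solubility s(t) and atmospheric CO2 c(t) are in
# phase, the annual mean of their product grows with the CO2 amplitude,
# because mean(s*c) = mean(s)*mean(c) + cov(s, c).
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
s = 1.0 + 0.1 * np.cos(t)            # solubility cycle (arbitrary units)
for k in (0, 1, 2):                  # CO2 seasonal-cycle amplitude factor
    c = 280.0 + k * 4.0 * np.cos(t)  # in-phase CO2 cycle (ppm)
    print(k, np.mean(s * c))         # annual-mean product rises with k
```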
Individually, neither of these two effects is likely to have a large impact on CO2atm concentrations. Assuming no feedbacks operate, after a change in seasonal cycle amplitude the new equilibrium CO2atm concentration would be unlikely to shift by more than the change in the high-latitude CO2 seasonal cycle amplitude. Considering the sea-ice mechanism, the dashed black line in Fig. 3 represents the average atmospheric CO2 concentration initially in equilibrium with the underlying seawater; the red dashed line then represents the average atmospheric CO2 concentration after a change in seasonal cycle magnitude, but prior to reaching a new air-sea equilibrium. The ocean will undergo a relative release of CO2 to the atmosphere until the atmospheric CO2 concentration averaged over the ice-free period is in equilibrium with the seawater again. Assuming the air-sea CO2 flux change has only a negligible effect on oceanic CO2 concentrations, the new equilibrium would be reached when the dashed red line has risen to the concentration of the dashed black line. The problem with this reasoning is that the present high-northern-latitude ocean is not in equilibrium with the atmosphere, and is still taking up CO2 until it sinks (Takahashi et al., 2009). The steady-state atmospheric CO2 concentration, and the net air-sea flux of CO2, resulting from a seasonal cycle shift under various conditions must therefore be quantified using coupled earth system models. A further caveat is that the seasonal melting of sea-ice can leave behind a stratified low-salinity lid, which could limit the volume of water that may come into equilibrium with the atmosphere, and modify the chemistry of that water (Cai et al., 2010). The potential suppression of air-sea exchange in heavily stratified, seasonally ice-covered waters may weight the net air-sea flux change towards that occurring in response to the solubility mechanism.

Despite being unlikely, under a pre-industrial climate configuration, to alter global CO2atm concentrations by more than a few ppm, the operation of the sea-ice and solubility mechanisms under a changing CO2atm seasonal cycle amplitude has important implications spatially and temporally. Firstly, an increase in the CO2atm seasonal cycle magnitude and a decrease in sea-ice extent (Intergovernmental Panel on Climate Change, 2007) over the coming decades will cause the seasonal-solubility-driven oceanic CO2 uptake to increase and move to higher latitudes, and will produce an increased intensity, but reduced area, of seasonal-sea-ice-driven relative out-gassing at the highest latitudes. These shifts will impact the location and magnitude of high-latitude ocean acidification (Steinacher et al., 2009). Secondly, abrupt changes in the terrestrial biosphere, whether through natural variability (e.g., drought) or anthropogenic land-use change (e.g., deforestation), could drive significant step changes or inter-annual variability in the high-latitude air-sea flux of CO2 without necessarily impacting the annually averaged CO2atm concentration. It is possible that year-to-year changes in the CO2atm seasonal cycle, rather than in annually averaged CO2 concentrations, could account for the observed, but largely unexplained, high-latitude air-sea CO2 flux variability (e.g., Watson et al., 2009; Le Quere et al., 2007). Finally, the combination of these mechanisms has potentially interesting implications for glacial-interglacial cycling.
As previously discussed, the termination of glacial periods tends to occur at times of high orbital obliquity (Huybers and Wunsch, 2005). High obliquity causes increased high-latitude insolation seasonality. A large seasonal cycle in high-latitude insolation will cause high seasonal variability in sea-ice cover and high seasonal variability in CO2 uptake and release by the terrestrial biosphere. The balance between the two identified mechanisms will be sensitive to the maximum annual sea-ice extent: when sea-ice is extensive, the sea-ice mechanism dominates over the solubility mechanism, and vice versa. During warm periods with high seasonality, the solubility mechanism will be strong and will pump CO2 into the ocean, whereas during cold periods of high seasonality the dominant mechanism will switch and (in a relative sense) pump CO2 into the atmosphere. The combination of these mechanisms would potentially allow the termination of a glacial to skip a number of periods of high obliquity, as is observed (Huybers and Wunsch, 2005), and to induce CO2 release, potentially triggering warming feedbacks (e.g., Gildor and Tziperman, 2000) and deglaciation, only once the glacial world has cooled enough to reach a sea-ice-determined threshold. Given a slow cooling, and therefore a slow increase in maximum sea-ice extent, this mechanism also offers a possible explanation for the switch from a ~40 kyr to a ~100 kyr glacial-interglacial periodicity at around 900 kyr before present (Raymo and Nisancioglu, 2003). To test whether the combination of these mechanisms could contribute to the timing of glacial terminations would require an appreciation of the spatial and intra-annual variability in glacial CO2atm concentrations. The limited temporal resolution of most palaeoclimate proxies makes this a difficult, but potentially important, challenge. Furthermore, it will be important to understand the past spatial and sub-annual variability in CO2 solubility and sea-ice extent. Although a uniform increase in CO2 solubility under a colder climate would make little difference to the mechanisms discussed, changes in the distribution of that solubility due to a reorganisation of ocean circulation may significantly amplify or reduce the climatic importance of this mechanism. Once validated for a glacial world, earth system models containing fully interactive and coupled carbon-cycle components could be run to test whether the described mechanisms play a role in the pacing of obliquity-driven glacial terminations.

Presently, the seasonal cycle in earth system model CO2atm concentrations is often considered a way of diagnosing and benchmarking changes in modelled terrestrial net primary production (e.g., Cadule et al., 2010), rather than as critical prognostic variability in its own right. Given the potential sensitivity of the climate system to seasonality-driven changes in the air-sea flux of CO2, it is important that the terrestrial biosphere components of earth system models treat the seasonal cycle of CO2atm as a critical component of the model, and that models are developed, validated, and explored accordingly.
Given the findings presented here, care must be taken when analysing model experiments in which the CO2atm concentration is prescribed without either seasonal or spatial variability. Experiments of this design will contribute heavily to the conclusions reached in the IPCC's fifth climate assessment (Taylor et al., 2009). Although it will be possible to diagnose the seasonal carbon fluxes simulated within these model experiments, the individual model carbon-cycle components will not be able to feed back on each other through the mechanisms described. It is therefore imperative that fully coupled carbon cycle simulations are run and explored to quantify the feedbacks missing from the main body of simulations, and, if nothing else, to show that feedbacks such as those discussed here are small. The feedbacks described here may also help us understand the complex issue of high-latitude ocean CO2 uptake in response to retreating sea-ice extent.

4 Conclusions

Despite being a prominent and dynamic feature of the carbon cycle, the climatic influence of the CO2atm concentration seasonal cycle has, to the best of my knowledge, not previously been explored. I demonstrate that changes in the amplitude of the CO2atm seasonal cycle can impact the mid-to-high-latitude air-sea flux of CO2. In seasonally ice-covered waters, the air-sea flux change occurs as a consequence of the synchronicity between the CO2atm and sea-ice seasonal cycles. Equatorward of the maximum sea-ice extent, the change in air-sea flux occurs as a result of the synchronicity between the CO2atm and seawater CO2 solubility seasonal cycles. The operation of the described mechanisms allows the sign of the net air-sea flux change to switch depending on the maximum sea-ice extent, making the combination of these mechanisms of particular relevance to both contemporary and glacial-interglacial climate change.

One aspect of the seasonality and air-sea flux feedback mechanisms not discussed here is that of shifts in seasonal cycle phase, rather than amplitude. Various mechanisms, such as a changing precipitation pattern or intensity, could shift the CO2atm cycle phase independently of the sea-ice and seawater CO2 solubility cyclicity. The theory presented here for how a changing CO2atm seasonal cycle amplitude impacts the air-sea CO2 flux can equally be used to understand the response of the air-sea CO2 flux to a changing CO2atm seasonal cycle phase. A relative shift in the phase of the seasonal cycles of CO2atm concentration and sea-ice extent or seawater CO2 solubility may have the capacity to produce much larger changes in ocean in/out-gassing than changes in the CO2atm seasonal cycle amplitude.

I have made no attempt in this study to quantify the magnitude of the highlighted carbon cycle feedbacks at steady state. To determine whether the described mechanisms could play a significant role in past or future carbon cycle change will require the spin-up of (ideally) fully coupled earth system models to equilibrium with atmospheric CO2 seasonal cycles of different amplitude (and potentially phase). If shown to be significant, the climatic impact of the seasonality-driven carbon cycle response must then be quantified.
Fig. 1. Comparison of latitudinally averaged global air-sea CO2 fluxes calculated from an observation-derived climatology (Takahashi et al., 2009) (black) and from climate simulations using the HadGEM2-ES model (red). Earth system model simulations were forced using greenhouse gas, anthropogenic aerosol, volcanic aerosol, land-use change, and solar cycle data from the years 1860-2005 (Taylor et al., 2009; Jones et al., 2011). The observation-based climatology has been calculated to represent conditions in the year 2000; to avoid sampling internal variability, the model results are presented as a mean value over the years 1980-2000. The model's ensemble mean has been calculated as the average of three historical simulations started at 50 yr intervals from the pre-industrial control simulation, and is therefore considered to sample the model's internal variability well.
Fig. 3. Explanation of how a change in the magnitude of the atmospheric CO2 seasonal cycle can change the air-sea CO2 flux in seasonally sea-ice-covered waters. During the year, high atmospheric CO2 concentrations occur around the time of maximum ice cover, and are therefore prevented from exchanging freely with the ocean, whereas at times of low atmospheric CO2 concentration there exists no barrier to exchange. The result of this synchronicity between the seasonal cycles of atmospheric CO2 concentration and sea-ice extent is that the annually averaged atmospheric CO2 concentration seen by the ocean is reduced relative to its full annual mean value as the amplitude of the atmospheric CO2 seasonal cycle is increased. The solid black and red curves represent the idealised annual cycles in atmospheric CO2 concentration at one and two times the seasonal cycle amplitude, respectively. The dotted black line represents the full annually averaged atmospheric CO2 concentration. The dashed black and red lines represent the partial average of atmospheric CO2 concentrations for the two different seasonal cycle amplitudes over the ice-free period.

Fig. 4. Latitudinal dependence of the phase synchronicity between the seasonal cycle of atmospheric CO2 concentrations and the solubility of CO2 in seawater. Correlation coefficients between observed monthly averaged atmospheric CO2 concentration seasonal cyclicity and the calculated monthly averaged seasonal cycle of CO2 solubility in seawater are plotted against observation latitude. Points relate to all CarbonTracker flask measurement sites containing at least five years' worth of data (Peters et al., 2007), as present at http://www.esrl.noaa.gov/gmd/ccgg/carbontracker on 11/8/2010. Atmospheric CO2 observations were detrended using a third-order polynomial, fitted (using the least-squares method) to all observations at each individual site. Detrended data were averaged into a typical annual cycle, and the correlation coefficient was then calculated between the 12 average months of atmospheric CO2 data and 12 months of latitudinally averaged CO2 solubility, calculated at the latitude corresponding to the relevant atmospheric CO2 measurement site. CO2 solubilities in seawater were calculated from World Ocean Atlas 2009 surface temperature and salinity climatologies (Locarnini et al., 2009; Antonov et al., 2009). The solid line depicts a cubic-spline-interpolated six-point moving average through all of the data. High correlation values indicate that the annual cycles in atmospheric CO2 concentration and CO2 solubility in seawater vary in phase at that latitude.
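As a hedged illustration of the recipe in the Fig. 4 caption, the sketch below (the function and argument names are illustrative, not from the paper, and it assumes every calendar month is sampled at the station) detrends a flask record with a third-order polynomial, averages the anomalies into a typical 12-month cycle, and correlates that cycle with a 12-month solubility climatology.

```python
import numpy as np

def seasonal_correlation(dec_year, co2, solubility_clim):
    """Correlate a station's mean CO2 seasonal cycle with a 12-month
    solubility climatology at the station latitude.

    dec_year: decimal years of flask observations; co2: ppm values;
    solubility_clim: 12 monthly CO2 solubilities (arbitrary units).
    """
    # Detrend with a third-order polynomial fitted by least squares
    trend = np.polyval(np.polyfit(dec_year, co2, 3), dec_year)
    anomaly = co2 - trend
    # Average anomalies into a typical 12-month cycle
    month = np.floor((dec_year % 1) * 12).astype(int)
    cycle = np.array([anomaly[month == m].mean() for m in range(12)])
    # Correlation between the two 12-month cycles
    return np.corrcoef(cycle, solubility_clim)[0, 1]
```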
Fig. 5. Cartoon explaining how a change in the magnitude of the atmospheric CO2 seasonal cycle can change the air-sea CO2 flux due to coupling between the seasonal cycles of seawater CO2 solubility and atmospheric CO2 concentrations. Where the atmospheric CO2 seasonal cycle and the seasonal cycle of solubility are in phase, an increase in the magnitude of the atmospheric CO2 seasonal cycle results in an annual-average increase in the atmospheric CO2 concentration in equilibrium with the underlying seawater. Dashed black and red lines represent an idealised atmospheric CO2 cycle at 1x and 2x the normal seasonal cycle, respectively. The blue line represents the solubility of CO2 in the underlying seawater throughout an idealised annual cycle. The solid black and red lines represent the atmospheric CO2 concentration in equilibrium with seawater (the product of the atmospheric mole fraction and the solubility, assuming a total pressure of 1 atmosphere) for 1x and 2x the atmospheric CO2 cycle amplitude, respectively. The elevation of the solid red line over the solid black line (CO2 at 2x and 1x seasonal cycle) in the first half of the year is not cancelled by an equal decrease in the second half of the year; consequently, the annually averaged atmospheric pCO2 seen by the ocean is elevated by the increase in the seasonal cycle amplitude. The elevated atmospheric pCO2 will drive a relative flux of CO2 into the ocean.
A Laboratory-Based Multidisciplinary Approach for Effective Education and Training in Industrial Collaborative Robotics

The rapid evolution of robotics across various sectors, including healthcare, manufacturing, and domestic applications, has underscored a significant workforce skills gap. The shortage of qualified professionals in the labor market has had adverse effects on production capacities. Therefore, the significance of education and training for cultivating a skilled workforce cannot be overstated. This research work presents the development of a pedagogical approach centered on laboratory infrastructure designed specifically with multidisciplinary technologies and strategic human–machine interaction protocols to enhance learning in industrial robotics courses. Progressive competencies in laboratory protocols are developed, focusing on programming and simulating real-world industrial robotics tasks, to bridge the gap between theoretical education and practical industrial applications for higher education students. The proposed infrastructure includes a user-configurable maze comprising different colored elements defining starting points, endpoints, obstacles, and varying track sections. These elements foster a dynamic and unpredictable learning environment. The infrastructure is fabricated using Computer Numerical Control (CNC) machining and 3D printing techniques. A collaborative robot, the Universal Robots UR3e, is used to navigate the maze and solve the track with advanced computer vision and human–machine communication. The amalgamation of practical experience and collaborative robotics furnishes students with hands-on experience, equipping them with the requisite skills for effective programming and manipulation of robotic devices. Empowering human–machine interaction and human–robot collaboration helps address the industry's demand for skilled labor in operating collaborative robotic manipulators.

Introduction

In recent years there has been significant progress in robotics and automation, in household tasks as well as in medicine, industry, the armed forces, and beyond. The main objective of this evolution is the adaptation of "the functions of robots with human needs (p. 1)" [1]. Nowadays, robotics is part of everyday life; however, there is a misconception that the more robots take over human tasks, the more unemployment will increase. In an uninformed society, this misconception can be an obstacle to further advances in robotics [1,2]. In fact, this evolution is expected to have a "positive impact of industrial robots on employment" (p. 1). The International Federation of Robotics (IFR) predicts that about 4 to 6 million jobs will be created due to robotics; if the jobs created indirectly are also considered, the number rises to 10 million [1,2].
This rapid advancement of robotization and automation has created a high demand for professionals with specialized training in the field of robotics. This demand must be met by qualifying new professionals, and education and training in robotics systems will play a fundamental and pivotal role in meeting it. The American Society for Training and Development (ASTD) reported that organizations in the United States of America spend about USD 164.2 billion on employee training [1,2].

Today, there is still a large discrepancy between education and the needs of people, and more specifically of industry. However, if we can bring the two closer together, it will be a gain both for those who seek to take advantage of education, in this case university higher education, and for those who employ graduates with this background. These employers will feel more comfortable hiring an employee without having to worry about the extra expense and time spent training them to be fit for the job. Specific training will always be needed due to the constant evolution of science and technology; still, if we reduce this training, however small the reduction may be, it will be an asset for both the employee and the employer in terms of less time being wasted [2–4].

When thinking about educational robotics, it is necessary to consider a multidisciplinary approach, as it touches fields such as mechanics, electronics, programming, and automation. STEM (science, technology, engineering, and mathematics) is an educational method that aims at interdisciplinarity, among other aspects, addressing various areas of study, including mathematics, science, and technology, and offering new benefits in education at all levels. This approach interconnects all the areas of study necessary for a better approach to industrial robotics; in addition, industrial robotics stimulates problem-solving skills, communication skills, teamwork skills, independence, and creativity [3,5–8].

It is estimated that there has been an 18–25% growth in the implementation of industrial robots. Because of this growth, the demand for new employees with appropriate training and skills is expected to increase. There will also be a need for many employees with robotics skills, with a great capacity to evolve with technology and the ability to give short training sessions to workers [2].

The main objective of this paper lies in the development of a protocol, with various degrees of difficulty, of laboratory work involving programming and testing real industrial robotics applications, to provide a greater understanding and a smaller discrepancy between educational robotics and industrial robotics for higher education students.

To fulfill the main goal, the following objectives are defined:
• Conduct a literature review on the topics under study: robotics, industrial robotics, educational robotics, and STEM;
• Design an infrastructure and a teaching support system adaptable to various curricular units;
• Develop a laboratory activity and create two laboratory protocols with various degrees of difficulty, based on functions that replicate tasks observed in an industrial environment;
• Analyze and discuss the results obtained.
The essence of collaborative robots is that a human being joins their strengths with those of the robot; that is, it is a symbiosis between the robot and the human, as shown in Figure 1. This approach offers numerous advantages, as robots are capable of continuous operation without any degradation in performance or precision. In contrast, human operators exhibit distinct characteristics. Over time, human workers experience fatigue, leading to a reduced capacity to sustain focus on tasks. This diminished focus results in decreased precision and efficiency, which, for a company, translates into potential profit losses or increased operational expenses [9,10].

In terms of ergonomics, the human capacity to bear weight is constrained by factors such as body size and load-bearing capabilities. These constraints not only prevent the execution of specific tasks but may also lead to fatigue and potentially debilitating musculoskeletal injuries that are challenging to recover from. In the context of bearing loads, robots exhibit their own set of limitations; however, these are primarily related to the maximum load they can bear, and adapting a robot's load capacity to the specific load it is designed to handle is typically sufficient. In contrast to the physical limitations of humans, one of the advantages of utilizing robots lies in their ability to consistently bear loads without interruption [9,11].

Nonetheless, collaborative robotics presents a multifaceted landscape characterized by a spectrum of advantages, drawbacks, and inherent challenges. This field remains a subject of ongoing research and advancement, offering significant prospects for further exploration. As delineated in Table 1, collaborative robotics presents notable advantages, particularly within industrial contexts. These advantages encompass economic viability, streamlined programming, reduced human effort, resilience to stress, enhanced safety, ergonomic benefits for human workers, facilitation of human–robot collaboration, and a high degree of versatility. Conversely, certain disadvantages, such as variability in operational efficiency, limitations in speed, and occasional constraints on load-bearing capacity due to reduced robustness, are observed. Regarding challenges, it is imperative to recognize the perpetual potential for advancements in overcoming them. Consequently, the field of collaborative robotics remains in a state of continual development and evolution [10,12].

Table 1. Advantages, disadvantages, and challenges of collaborative robotics (adapted) [10,12].
Advantages: economic viability; human–robot collaboration; high versatility; security; ease of programming; increased ergonomics.
Challenges: division of tasks; adaptability.

Education plays a pivotal role in the development of a society, as it is an educated workforce that lays the foundation for the advancement of a nation. The attainment of education can be pursued using a diverse array of methodologies. However, within the context of this article, the focus is directed toward higher education. In an academic setting, the educational process is contingent on a multitude of variables, including (1) the prevailing environment, (2) the specific field of study under consideration, and (3) the level of complexity and rigor required, among other determinants. These very factors assume a central role in determining the pedagogical approach to be used. This encompasses decisions regarding the type of classes to be offered (be they practical or theoretical), whether the educational experience takes place in the field or within the institutional infrastructure, and whether the learning paradigm is oriented toward problem-based self-directed study or guided instruction by an educator. The permutations in educational methodologies are manifold, offering a spectrum of possibilities for educational delivery [1,13].

Using robots in education can be a valuable tool due to its multidisciplinary nature. It covers various subjects like physics, biology, geography, math, electronics, and mechanics. Learning in these areas not only imparts knowledge but also enhances skills such as writing, reading, research, teamwork, critical thinking, decision-making, problem-solving, communication, design, and computational thinking [14].

When researching multidisciplinary robotics in education, the acronym STEM (science, technology, engineering, and mathematics) always comes up, as it is related to a multidisciplinary approach in education. Some authors argue that interdisciplinarity is very important in education today, as this approach helps students to be better prepared for constant technological evolution [6–8].
Typically, advancements in industrial robotics are first adopted within the industrial sector before being introduced into educational environments. This precedence is primarily due to the industry's continuous pursuit of increased efficiency, which provides a competitive edge [1].

Educational institutions aspire to reduce the disparity in skills between academia and industry, yet they encounter impediments such as financial constraints, resistance to curriculum updates, and logistical complexities in establishing experiential learning frameworks with industrial collaborators. These hurdles prevent students from acquiring practical insights into the industry and gaining hands-on experience with potential work-related challenges, thereby impeding their readiness for their careers [15].

An alternative approach is to reform teaching methodologies within educational institutions, a change principally within the purview of these establishments themselves. Such innovation, however, needs to be an ongoing process, given the constant evolution of the robotics field. As robotics technology continues to advance and find applications in new areas, there will be a growing demand for multidisciplinary skills. Thus, the education sector must keep pace with these developments, consistently updating and expanding its curriculum to prepare students adequately for these emerging opportunities [15,16].

Materials and Methods

In the pursuit of developing a protocol within the domain of industrial robotics, we established a robust support infrastructure, which included the creation of a dedicated track. The central focus of this approach revolves around crafting engaging add-ons using 3D printing technology and creating a versatile workspace, both of which play pivotal roles in facilitating a wide array of educational activities. The overarching objective is to actively foster heightened engagement and promote a profound acquisition of knowledge among our students. The infrastructure enables the assembly of a track meticulously designed to be constructed by the robot while obeying the following rules. The pathway, defined by various pieces, was designed for the robot's analysis and construction:
• the green piece marks the starting point;
• the red pieces serve as obstacles that must be avoided during construction;
• the blue piece designates the endpoint;
• the track's structure includes pieces of varying lengths.
After assembly, the robot places a marble at the starting point, and the marble then traverses the entire track to reach the endpoint. The activity is assembled on a base that has a grid of fittings to allow precise positioning of the parts that compose both the setup and the track assembly. Figure 2 shows a possible configuration to solve a random setup.
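Although the protocols themselves are pendant-based, the pathway-solving step maps onto a standard grid search. The following Python sketch is a minimal illustration (the cell coding, grid size, and names are hypothetical, not taken from the protocols): it finds a shortest path on the fitting grid from the green start to the blue endpoint, treating red pieces as blocked cells, using breadth-first search.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on the fitting grid.

    grid: 2D list where 'R' marks an obstacle (red piece) and any other
    value is free; start/goal: (row, col) of the green and blue pieces.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] != 'R' and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

maze = [['G', '.', 'R'],
        ['.', '.', '.'],
        ['R', '.', 'B']]
print(shortest_path(maze, (0, 0), (2, 2)))
```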
The development of the track involved several concepts and phases. In the initial phase, CAD drawings of the track were created, followed by 3D printing for prototyping and testing. During this initial phase, it was important to test the dimensions of the track, as well as the tolerances for fitting and threading, to better understand which option best met the desired requirements.

The base of the track was developed to support all the fittings that make it up, while also serving as a storage location for all the pieces and obstacles developed for this purpose. As shown in Figure 3, the base has a configuration that facilitates the manipulation of the parts by the robot and holes that allow the storage of all the parts involved. Its layout is justified by the robot's working area being circular.

The chosen material for the base of the track was medium-density fiberboard (MDF), machined using a Pronun CNC router. The following images depict the machining process: Figure 4a shows the Pronun CNC router; Figure 4b shows the wooden board (MDF),
650 mm × 650 mm × 18 mm; Figure 4c shows drilling with a 5 mm diameter bit; Figure 4d shows the finishing drilling with a 13 mm diameter bit; and Figure 4e shows the milling of the outer contour.

Next, the wooden board underwent a surface treatment process to provide greater durability. The following images illustrate the surface treatment process after machining: Figure 5a shows the raw wooden board; Figure 5b shows the sanding of the board; Figure 5c shows the application of pore filler; Figure 5d shows the application of paint; and Figure 5e shows the finishing application of matte varnish.

Pins were then created, as shown in Figure 6a, to assist with the positioning of the track. These pins were inserted into the holes in the base, as shown in Figure 6b. Several tolerance tests were conducted for the pins, ranging from 12 mm to 13 mm in diameter. A diameter of 12.2 mm proved to be the best fit, as it produced a tight fit that required some force to remove, which was perfect for the intended purpose.

One of the final elements created to complete the track was the marble holder support, as shown in Figure 7. It was designed to be fixed in place, with the lower peg allowing the robot to easily retrieve the marble. A detail of this component is that the marble rests on two elements that keep it centered on the piece, facilitating its positioning.
In Figure 9, an example of a track assembly of this version with all the elements can be seen.However, there were some issues, such as the fittings not all being the same and the marble not being able to pass through the heights, which would have been advantageous in terms of versatility.All the pieces have the same height about the z-axis, which facilitates the assembly.Therefore, they are truly differentiated by the length of the track sections.There are 5 different track sections ranging in length from 78 to 263 mm.However, what matters in terms of overall length is the distance between their fittings, as shown in Figure 8, which ranges from 50 to 250 mm with increases of 50 mm in each piece to match the space between the fittings in the base.In Figure 9, an example of a track assembly of this version with all the elements can be seen.However, there were some issues, such as the fittings not all being the same and the marble not being able to pass through the heights, which would have been advantageous in terms of versatility.The column section has an interior hole, as shown in Figure 10a, that allows the marble to transition not only from one track section to another but also from a track section to a column section and then to another track section, as depicted in Figure 10b.ranges from 50 to 250 mm with increases of 50 mm in each piece to match the space between the fittings in the base.In Figure 9, an example of a track assembly of this version with all the elements can be seen.However, there were some issues, such as the fittings not all being the same and the marble not being able to pass through the heights, which would have been advantageous in terms of versatility.The column section has an interior hole, as shown in Figure 10a, that allows the marble to transition not only from one track section to another but also from a track section to a column section and then to another track section, as depicted in Figure 10b.The column section has an interior hole, as shown in Figure 10a, that allows the marble to transition not only from one track section to another but also from a track section to a column section and then to another track section, as depicted in Figure 10b. ranges from 50 to 250 mm with increases of 50 mm in each piece to match the space between the fittings in the base.In Figure 9, an example of a track assembly of this version with all the elements can be seen.However, there were some issues, such as the fittings not all being the same and the marble not being able to pass through the heights, which would have been advantageous in terms of versatility.The column section has an interior hole, as shown in Figure 10a, that allows the marble to transition not only from one track section to another but also from a track section to a column section and then to another track section, as depicted in Figure 10b.It can be observed that the track section fitting shown in Figure 11a enhances the stability of the track.The bottom fittings of this version are all the same, which allows for greater versatility and avoids any stability problems during assembly.The inner tunnel of each track section, as shown in Figure 11b, begins and ends tangentially to the surface of the component, allowing for smooth movement of the marble. 
The final piece, as shown in Figure 12a, was designed so that the marble, upon reaching the end of the track, stops in the transport piece, as shown in Figure 12b. The transport piece has a round concavity in its center to minimize the movement of the marble during transportation. It also has lateral tabs that serve as guides when it meets the final component, as depicted in Figure 12c,d.

The remaining sections of the track, namely section 2, shown in Figure 13a, section 3, shown in Figure 13b, section 4, shown in Figure 13c, and section 5, shown in Figure 13d, are identical in shape. The only difference lies in the length between the fittings at the bottom, which is 50 mm, 100 mm, 150 mm, 200 mm, and 250 mm, respectively.

The fittings were modified because they were too small and did not provide sufficient stability. To improve the stability of the track and to avoid the track falling apart during assembly, the fittings were increased from approximately 2 mm to 7.5 mm, as shown in Figure 14. This increase allowed for a larger contact area, ensuring greater stability.
With these components presented, it was possible to assemble the track, as shown in Figure 15. This version was tested and deemed to meet the desired requirements.

Results

In this section, the procedures and phases entailed in accomplishing the task are explained.

Infrastructure-Analysis of Stages

The primary objective of this work is the establishment of an infrastructure bridging industrial robotics and robotics in education. This infrastructure, as elaborated earlier, underwent multiple iterations to achieve a versatile version suitable for students at varying levels of complexity. The ultimate iteration is capable of not only constructing the shortest path while circumventing obstacles but also offers adaptability for navigating obstacles along the shortest route.

Furthermore, the foundational structure designed to facilitate the assembly and storage of track components can be readily customized with new elements and configurations to accommodate diverse activities. For instance, it can be used for simulating assembly line operations and assessing defects in specific components in terms of their shape, finish (e.g., painting, coating), or welding processes. The potential avenues for advancing this work are limitless.

During this project, a teaching support system was created, which could be the starting point for initiatives in industrial robotics courses or even for more dynamic and interactive demonstrations for students. With some modifications and adaptations, this support material can also be used for other disciplines in the field of industrial automation, which would be an interesting combination from an industrial perspective, as many industries integrate automation with robotics.

Laboratory Protocols

The two developed protocols are attached. They differ in terms of their level of difficulty, with the second protocol being the more complex.

The first protocol has four activities designed for students to familiarize themselves with the UR3e robot. This protocol involves assembling the track using the robot pendant and guiding the marble through the track.
The first activity involves arranging the pieces (the initial green piece, the final blue piece, and the red pieces) in a pre-determined position; the robot must pick them up and place them in the designated location according to the protocol.

In the second activity, the pieces are placed on a conveyor belt in a known sequence, and the robot must retrieve them from the conveyor belt and assemble them in the predetermined location.

In the third activity, the robot must retrieve the pieces from the conveyor belt, where they are randomly placed, and differentiate them by color using the wrist camera.

The last activity is the building of a track for a predefined and manually assembled challenge. The various components forming the maze are assembled with the manipulator, which must be programmed specifically for this manipulation task, demanding heightened complexity due to the precision and orientation requirements of the pieces.

The second protocol delves into advanced robot programming and introduces the student to research-style activities. A Python implementation is essential to derive an algorithm capable of planning the most efficient path.

Figure 16 displays the layout sequence followed in both protocols. In the first protocol, students create a fixed challenge, meaning the programmed solution only works with that specific challenge layout. The second protocol, however, allows for flexible challenges: changes to the challenge layout are accommodated in each attempt, enabled by the integration of a shortest-path planning algorithm.

Figure 17 shows a simplified diagram of the program logic. The track can have three possible configurations: a track with one straight piece, a track with two straight pieces and a curve, or a track with three straight pieces and two curves. The arrangement of the pieces in the photograph determines the type of track to be assembled.
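The paper does not spell out the planning algorithm itself; on an unweighted grid of base positions, a breadth-first search is one plausible minimal choice. The sketch below (in Python, the language used in Protocol 2) models obstacles as blocked cells and recovers a shortest start-to-goal path; all names and the grid encoding are our assumptions:

from collections import deque

def shortest_path(grid, start, goal):
    """BFS on a 4-connected grid: 0 = free cell, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []               # walk predecessors back to the start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                     # no path around the obstacles

# Start and goal mimic the green and blue pieces; 1s mimic obstacle pieces.
board = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
print(shortest_path(board, (0, 0), (2, 2)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]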
When observing the assembly of a track with only one straight piece, it is evident that the fastest path is a straight line, which facilitates the assembly process. However, there can be multiple paths depending on the arrangement of the obstacles and the start and finish pieces. By analyzing Figure 18a, the fastest path between the initial and final points can be determined, as shown in Figure 18b. For this assembly, only one straight piece is required, as shown in Figure 18c. The robot then places the marble at the starting point (green piece) to follow the shortest path through the track to the endpoint (blue piece), as depicted in Figure 18d.

In a track with two straight pieces and a curve, the situation can be different, because the arrangement of obstacles can lead to two possibilities for the shortest path. In the example shown in Figure 19a, it is easy to visualize the shortest path, as shown in Figure 19b. For the assembly of this version, four pieces are required: two heights and two straight pieces, divided into three levels of height. In level 1, a height piece is placed, as shown in Figure 19c. In
level 2, the final straight piece and a height piece are placed, as depicted in Figure 19d. Finally, in the last level, level 3, the initial straight piece is placed, as shown in Figure 19e. Then, the robot places the marble at the starting point (green piece) to follow the shortest path through the track to the endpoint (blue piece), as shown in Figure 19f.

In a path with three tracks and two changes in direction, depending on the arrangement of the obstacles, there can be two possible paths. The example in Figure 20a illustrates the shortest path, as shown in Figure 20b. For the assembly of this version, eight pieces, including five heights and three straight sections, are required, divided into four levels. In level 1, two heights are placed, as shown in Figure 20c. In level 2, the final straight section, the intermediate height, and the initial height are placed, as shown in Figure 20d. Then, in level 3, the intermediate straight section and an additional height in the initial position are added, as shown in Figure 20e. In the final level, level 4, the robot places the initial straight section, as shown in Figure 20f. Finally, the robot places the marble to traverse the track from the initial point (green piece) to the final point (blue piece) following the shortest path, as depicted in Figure 20g. A minimal sketch of this level-by-level sequencing is given after the next paragraph.

To establish the practical relevance of the proposed activities within industrial contexts, a panel of field specialists analyzed the suggested protocols. During interactive sessions and practical demonstrations, these experts supported the significance of the pick-and-place tasks, highlighting their adaptability to real-world operational scenarios encountered in industry. They pointed out that, while the activities are designed with a generalist approach, they serve as an essential foundation for implementing specific task modifications in diverse industrial applications. Additionally, some interest was expressed in incorporating these protocols into their training modules, acknowledging the efficacy of the multi-disciplinary methodology proposed for educational and training programs in the domains of automation and robotics.
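Returning to the level-by-level assembly illustrated in Figures 19 and 20, the ordering itself can be captured as a bottom-up iteration over a per-level parts list. A minimal sketch, with the Figure 19 parts transcribed from the text (the data layout and names are our assumptions):

def assembly_sequence(levels):
    """levels: dict mapping level number -> list of pieces placed at that level."""
    for level in sorted(levels):        # build strictly bottom-up
        for piece in levels[level]:
            yield (level, piece)

# Per-level parts list for the two-straights-one-curve track of Figure 19.
figure19 = {1: ["height"],
            2: ["final straight", "height"],
            3: ["initial straight"]}
for step in assembly_sequence(figure19):
    print(step)
# -> (1, 'height'), (2, 'final straight'), (2, 'height'), (3, 'initial straight')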
Discussion

Currently, industries face rising pressure to meet consumer demands efficiently and economically. To stay competitive, they are investing heavily in streamlining production processes to offset increasing raw material costs.

This evolution in industry demands a parallel shift in educational paradigms. Educational institutions must renew their approaches, complementing theoretical knowledge with more practical and advanced teaching methods. This ensures students are better prepared for the dynamic challenges awaiting them in the professional sphere upon course completion.

The main objective, the creation of a support system for education linking industrial robotics and robotics in education, was fully achieved. With this work, students can participate in didactic learning in a way that is close to the reality of the job market. It allows students to finish their training with the minimum knowledge needed so that, when they enter the job market, they are prepared for the problems that may arise. Python's inclusion in the last activities of Protocol 1 and in Protocol 2 is primarily due to its widespread use as a requirement in advanced robotics, especially when integrated with computer vision.

Throughout the completion of this work, and particularly in the development of the track, there were several iterations. The result was optimized in terms of versatility and ease of use. This version was able to solve the problem of height differences, allowing the track to be assembled without any level differences between the start and end of a straight section. Another problem solved was the absence of curved pieces, which was addressed based on the direction of the track sections. For the assembly of the track by the robot, a program was developed in which, in an initial phase, all necessary positions were defined and added to the position libraries. Subsequently, this same program analyses the picture taken with the robot arm's camera, detecting the shortest path, upon which the robot assembles the track accordingly. This entire process is governed by the laboratory activity protocols, which present various levels of difficulty.

The first protocol involves a series of progressively challenging activities. It begins with simpler tasks focused on assembling parts of the track using the robot's pendant. The second and third activities then introduce increased complexity with the integration of sensors and vision, respectively. These activities aim to familiarize students with fundamental robotics concepts and enhance their comfort in operating a robot's pendant. However, to complete all tasks, the use of the robot's teaching console and an external Python program becomes necessary.

The fourth activity aims to use the pendant alone, consolidating the concepts learned in the preceding activities. This phase encompasses the entire process executed through the pendant interface, thus emphasizing and reinforcing comprehension of previously acquired knowledge.
Conclusions

This research highlights the relevant role of a pedagogical approach centered on the development of a specialized infrastructure tailored for collaborative robotics, human-machine interaction protocols, and advanced robotics programming. The establishment of these protocols not only enriches the learning experience within the robotics curriculum but also provides a platform for future research endeavors and project engagement within the existing laboratories. By emphasizing real-world robotic task programming and simulation, this approach enhances students' skills in programming industrial robots, thereby fostering their preparedness for the dynamic demands of the industry. As this work progresses, new concepts and themes arise, leading to new questions and ideas for applying the main objective of this work, as well as to the development of additional protocols for other applications of robotics in industry, such as welding, defective material separation, etc. Additionally, the improvement and optimization of the track assembly program, reducing its size and making the assembly process faster, will be addressed in future work.

Figure captions:

Figure 2. An example of a track constructed with the UR3e. The colored pieces define the path, along with the track made up of black pieces.
Figure 3. The track base and storage are designed to be adapted to the UR3e framework.
Figure 4. The manufacturing process of the wooden board: (a) Pronun CNC router; (b) wooden board (MDF), 650 mm × 650 mm × 18 mm; (c) drilling with a 5 mm diameter; (d) drilling with a 13 mm diameter; and (e) milling of the outer contour.
Figure 5. Treatment process: (a) raw wooden board; (b) sanding the wooden board; (c) application of pore filler on the board; (d) application of paint on the board; and (e) application of matte varnish on the board.
Figure 7. Marble support used in the track.
Figure 9. CAD model for a track assembly within the developed framework.
Figure 10. (a) Height tunnel, cross-sectional view. (b) Straight to height to straight passage.
Figure 12. (a) Transport component. (b) Final component. (c) Fitting of the final component and the transport piece. (d) Top view of the fitting of the final component and the transport piece.
Figure 14. Cross-sectional view of the fittings.
Figure 15. The track built with five iterations, which is the most complex type of path that the UR3e can build in this framework.
Figure 16. Comprehensive diagram illustrating the protocols developed.
Figure 17. Program flowchart that enables track analysis and assembly.
Figure 18. Movement sequence of Protocol 2: track with a single straight. (a) Example of piece arrangement. (b) Shortest path. (c) Assembly level 1. (d) Marble placement.
Figure 19. Movement sequence of Protocol 2: track with two straight sections and one curve. (a) Setup example. (b) Shortest path. (c) Assembly level 1. (d) Assembly level 2. (e) Assembly level 3. (f) Placing the marble to travel.
Figure 20. Movement sequence of Protocol 2: track with three straight sections and two curves. (a) Example of piece arrangement. (b) Shortest path. (c) Assembly level 1. (d) Assembly level 2. (e) Assembly level 3. (f) Assembly level 4. (g) Placing the marble to traverse the track.
2024-01-07T16:20:29.070Z
2024-01-05T00:00:00.000
{ "year": 2024, "sha1": "032487054781fad7cf979e664c062f8020197642", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2813-8856/1/1/2/pdf?version=1704434509", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d9d1c8e32b2781fc9c00332f5162782b0d500bbc", "s2fieldsofstudy": [ "Engineering", "Education", "Computer Science" ], "extfieldsofstudy": [] }
220605447
pes2o/s2orc
v3-fos-license
Two different experiments on teaching how to program with active learning methodologies: a critical analysis

To combat the difficulty many students have in learning how to program, the failure rates of introductory programming courses and their traditionally high dropout rates, teachers have to use strategies that motivate students and improve their skills. Active methodologies and student-centered instruction can be a solution to get students interested in the subject, preparing assignments while learning in the classroom. This article reports on two very different experiences in two academic years. In the first year, the agile SCRUM methodology was used, with groups of five students, three iterations and a final project. In the second year, Project-Based Learning was used, with groups of three students for two different products, changing the composition of the groups. In both cases, peer grading was used. The results show that in the first case there was an increase in the approval rate, while in the second case there was an increase in the dropout rate. In this article we make a critical analysis of the results, analyzing what was beneficial in each experiment in order to find an ideal model for using active methodologies to teach freshman computer science students how to program.

INTRODUCTION

Teaching programming to students who have never done it before is not an easy task. Some say that programming is very difficult [1] [2], while for others it may be easy [3]. The truth is that introductory programming courses traditionally have high dropout and failure rates [4]. A teacher has a moral and professional obligation to develop strategies to combat this problem. Active teaching methodologies have been widely embraced for this purpose [5] [6], involving students in doing things and thinking about the things they are doing [7]. It is a process of having students engage in some activity that forces them to reflect upon ideas and how they are using those ideas, keeping students mentally (and often physically) active while learning, through activities that involve gathering information, thinking and problem solving [8].

Active learning presupposes the concept of Student-Centered Instruction (SCI), often defined by contrast with traditional instructional approaches characterized by greater teacher direction [9]. SCI is an instructional approach in which students influence the content, activities, materials, and pace of learning. This learning model places the student (learner) at the center of the learning process. The instructor provides students with opportunities to learn independently and from one another and coaches them in the skills they need to do so effectively [8]. It has proven to be a good way to motivate students and make them work on the subjects taught in the classroom, attracting and engaging them.

Herzberg's Two-Factor Theory [10] differentiates motivator factors from hygiene factors, intrinsic and extrinsic to the job respectively. The Three-Motivator Theory [11] indicates that course success can be improved by removing demotivators, increasing intrinsic motivators or increasing extrinsic motivators, or any combination of these three, as appropriate in the course context. Several studies prove the advantages of, and the motivation created in students by, the use of active methodologies as opposed to passive learning in large classes, such as in Technology-Enabled Active Learning (TEAL) [12].
In programming courses, instructional activities encourage students to learn how to program by doing programming and thinking about what they are doing. A variety of approaches fit beneath the umbrella of student-centered learning, including case-based learning, goal-based scenarios, learning by design, project-based learning, and problem-based learning [9]. Project-based learning is a comprehensive approach to classroom teaching and learning that is designed to engage students in the investigation of authentic problems. Students are responsible for both the questions and the answers to such problems [13]. This form of learning allows students to experiment and improve their skills by developing medium-sized projects, in contrast to the small programs usual in courses of this type. With this methodology the students gain independence and self-study traits, connecting the subjects taught in class with aspects of real life.

SCRUM is a form of agile approach to software development [14]: a framework within which people can address complex adaptive problems while productively and creatively delivering products of the highest possible value [15]. The teams (scrum teams) are usually made up of 6 to 10 people, one of which is the product owner and another the scrum master. The product backlog is divided into n sprints, each containing a sprint backlog defined at the beginning of each sprint in a meeting (the sprint planning meeting) where the entire team must be present and where the Product Owner prioritizes the tasks to be included in this list [16] [17].

This article reports on two very different experiences in two academic years. In the first year, the agile SCRUM methodology was used, with groups of five students, three iterations and a final project. In the second year, Project-Based Learning was used, with groups of three students for two different products, changing the composition of the groups. In both experiments, the members of the group were responsible for the final product, but the teacher indirectly followed the work of the groups. One of the forms of control was the use of peer evaluation, done anonymously by each member of the group. Peer assessment using marks, grades, and tests has shown positive formative effects on student achievement and attitudes [18]. Since peer assessment counts towards part of the grade of each project, the distribution of students into groups was made by the teacher.

This article is divided into five parts: the course characterization (program, schedule, evaluation, and demographics of participants); the characterization of the projects (aims, rules and surveys); the results; the discussion; and the final conclusions.

A. Program

The course is part of a university degree in Computer Science. It is taught in the first semester of the first year and constitutes the students' first contact with computational thinking and a programming language. In this course of a propaedeutic nature, a student should, among other skills, be able to develop and implement computational solutions for problem solving, that is, to learn how to program correctly and effectively.
Before elaborating a program, the student must understand the problem, develop strategies for the precise specification of the problem to be solved with the machine, and establish methods for the detailed and rigorous description of solutions that can be implemented on a computer. The programming language chosen was C. Classes are divided into theoretical and practical laboratory classes, with 2 hours and 4 hours per week, respectively. The planned program includes computational thinking using top-down design and algorithms; conditional structures; the C programming language; loops; functions and procedures; arrays (one-dimensional and multi-dimensional); array sorting and searching; record arrays (1 to 1, 1 to n and n to n); and alphanumeric strings and the frequency of sub-strings.

B. Evaluation

The evaluation method of the course is based on a continuous evaluation model with four elements of evaluation and an attendance requirement above 60%. The tests involve the use of computers and paper and have an expected duration of 90 minutes with a 15-minute tolerance. The exams (resit, special season) are expected to last 120 minutes with a 15-minute tolerance.

B.1 Academic year 1

Grade = Test1 * 20% + Test2 * 25% + Test3 * 30% + Project * 25%, where Test1 is the score of the test taken in the fifth week of classes, Test2 the score of the test taken in the tenth week, and Test3 the score of the test taken in the last week of classes of the semester. Project is the grade assigned to the student for the project presented in the last week of classes of the semester.

Other seasons: Grade = Exam * 75% + Project * 25%.

B.2 Academic year 2

Grade = Test1 * 40% + Test2 * 40% + Project1 * 10% + Project2 * 10%, where Test1 is the score of the test taken in the eighth week of classes and Test2 the score of the test taken in the last week of classes of the semester. Project1 is the grade given to the student for the project presented in the eighth week of classes and Project2 the grade for the project presented in the last week of classes of the semester.

Other seasons: Grade = Exam.

C. Demographics of participants

C.1 Academic year 1

55 students were enrolled and divided into two practical classes. However, 9 students never attended any theoretical or practical class. 46 students responded to an initial survey: three female (6.5%) and 43 male (93.5%). The average age was 19.9 years and the most frequent age was 18 years. The maximum age was 33 and the minimum was 18, with 96% of the students being 18, 19 or 20 years old. 30 students had taken a computer science course in secondary education: 22 attended Computer Applications B in the 12th year and eight Information and Communication Technologies (ICT) in the 9th year. 28 students replied that they had some programming knowledge, referring to JavaScript, C, Pascal, and Python.

C.2 Academic year 2

52 students were enrolled and divided into two practical classes. However, 12 students never attended any theoretical or practical class. 37 students responded to an initial survey: five female (14%) and 32 male (86%). […] Design in the 10th, 11th and 12th years. 19 students replied that they had some programming knowledge, referring to Java, JavaScript, C#, C, Pascal, HTML and CSS, Visual Basic and Python.
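For reference, the two continuous-assessment formulas above translate directly into code. A minimal sketch (Python; the usual Portuguese 0-20 grading scale is our assumption):

def grade_year1(test1, test2, test3, project):
    # B.1: Grade = Test1*20% + Test2*25% + Test3*30% + Project*25%
    return 0.20 * test1 + 0.25 * test2 + 0.30 * test3 + 0.25 * project

def grade_year1_resit(exam, project):
    # B.1, other seasons: Exam*75% + Project*25%
    return 0.75 * exam + 0.25 * project

def grade_year2(test1, test2, project1, project2):
    # B.2: Grade = Test1*40% + Test2*40% + Project1*10% + Project2*10%
    # (in other seasons the grade is the exam alone)
    return 0.40 * test1 + 0.40 * test2 + 0.10 * project1 + 0.10 * project2

print(grade_year1(12, 14, 10, 16))  # -> 12.9
print(grade_year2(12, 14, 16, 16))  # -> 13.6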
PROJECTS

Group work is a structured way to get students to put into practice what they have been learning in the curricular units. As the work model includes peer assessment within the groups, it was decided not to allow students to form their own groups, since that choice is often made only out of friendship, which would jeopardize a fair assessment of colleagues.

A. Phases

A.1 Academic year 1

There were three phases of work, ending in the 5th, 10th and 15th weeks, but with a single presentation and a single final grade.

A.2 Academic year 2

There were two phases of work: one ending in the middle of the semester and another ending with the end of classes. In the proposed work, groups of three students were created by the professor.

B. Constitution of working groups

B.1 Academic year 1

In the proposed work, groups of five students were created by the professor. For the constitution of the working groups in the first phase, the course attendance record was used and divided into two parts: students who had already attended class and students who had always been absent. The distribution of students into groups was presented in the third week of the semester. 11 groups of five students were created. The groups remained the same from the beginning to the end of the semester.

B.2 Academic year 2

For the constitution of the working groups in the first phase, the attendance sheet was used: students were ordered according to their number of attendances in the first three weeks of classes. The distribution of students into groups was presented in the fourth week of the semester. The students were given an assignment brief covering the material taught up to the middle of the semester. 13 groups were created: 11 groups of three students and two groups of two students.

For the constitution of the working groups in the second phase, the students' grade sheet for the first test was used: students were ordered according to their grade on test 1. The distribution of students into groups was presented in the tenth week of the semester. The students were given an assignment brief covering the material taught up to the end of the semester. 12 groups of three students were created. The constitution of the groups is therefore different in project 1 and project 2.

C. Submissions

The product of each phase of work is submitted to MOODLE by the project leader: the initial product consists of a document with the name of the project leader and an explanation of how the group intends to solve the problem. The final product consists of a program in the C language and a document with an explanation of the program: scheme, algorithm and/or text. Presentation is mandatory. The project brief provided that each group would choose a different type of store and simulate the various activities of that business.

D. Surveys

In each of the phases of the group work, two surveys are answered by each member, in which each member evaluates their peers. The grade of each phase assigned by the teacher is corrected by the average of the grades assigned by the peers of each member of the group, provided that at least two other members of the group answer the surveys. Each student had to answer a survey at the beginning of each phase of the project and another on the delivery day. The surveys were anonymous but included the student's number. Seven questions were answered on a scale of 0 to 5 (from "nothing" to "excellent"). Three questions about each colleague were answered on the same scale. There was a final open question for comments and suggestions.
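The correction rule above leaves the exact combination open; one plausible reading, in which the teacher's phase grade and the mean peer rating (both on the 0 to 5 scale) are weighted equally, is sketched below. The equal weighting and the function name are our assumptions, not stated in the paper:

def corrected_grade(teacher_grade, peer_ratings):
    """Apply the peer correction only when at least two peers responded."""
    if len(peer_ratings) < 2:      # fewer than two respondents:
        return teacher_grade       # the teacher's grade stands uncorrected
    peer_mean = sum(peer_ratings) / len(peer_ratings)
    return (teacher_grade + peer_mean) / 2

print(corrected_grade(4.0, [5, 3, 4]))  # -> 4.0
print(corrected_grade(4.0, [2]))        # -> 4.0 (only one respondent)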
The questions were:
1. I am enjoying this group work.
2. I am enjoying working with this group.
3. I feel that I am improving my skills in the course because of this group work.
4. I feel that I am improving my group work skills because of this group work.
5. My presence at group meetings (face-to-face, Skype, ...).
6. My effort put into the group work until today.
7. Self-assessment from day 1 until today regarding the group work.
[For each colleague] A. Colleague name. 8. My colleague's assessment regarding the group work.
Observations and suggestions.

E. Summary of the semester calendar

E.1 Academic year 1

The following figure outlines the surveys, documents and C programs to be submitted by students each week, and also indicates the weeks for which the tests are scheduled:
- F1, F2 and F3 are the surveys for the end of each of the three project phases;
- Doc1a, Doc1b, Doc2a, Doc2b, Doc3a and Doc3b are the beginning and end documents for each of the three project phases;
- C1, C2 and C3 are the C programs at the end of each of the three project phases;
- Test1, Test2 and Test3 are the tests of the 5th, 10th and 15th weeks.

E.2 Academic year 2

Figure 2 outlines the surveys, documents and C programs to be submitted by students each week, and also indicates the weeks for which the two tests are scheduled:
- P1a, P1b, P2a and P2b are the surveys for the beginning and end of each of the two project phases;
- Doc1a, Doc1b, Doc2a and Doc2b are the beginning and end documents for each of the two project phases;
- C1 and C2 are the C programs at the end of the first and second phases of the project;
- Test1 and Test2 are the tests at the beginning of the 8th week and the end of the 14th week.

F. The problem

F.1 Academic year 1

The objectives of the work were defined by the teacher. Each phase has to follow the material taught so far in class. The text of the problem was given together with the list of the constitution of the groups.

F.2 Academic year 2

The objectives of the work were defined by the teacher, but the brief allowed the students to choose some aspects as they wished. Since what was intended was the management of a store, students could choose the business. The choices in the first project were travel agency, school supplies, DIY store, cinema, disco, pharmacy, ice cream shop, jewelry, restaurant, shoe store, supermarket, technology store and pet store.

RESULTS

A. Academic year 1

The F1 survey was answered by 24 students, F2 by 28 and F3 by 24 students, although the distribution of groups in the first part of the project foresaw 55 students. There are no significant differences between the responses for the three project phases. It only appears that the first two questions (enjoying the work and the group) and questions 3 and 4 (improving skills) have a worse average in the first questionnaire, and that the highest averages are the responses relative to the student himself while the worst are relative to his colleagues (Figure 3). The most frequent answer to all questions in phase 1 was 5, excellent. The most frequent answer to almost all questions in phase 2 was 5, excellent, except for the evaluation of colleagues, which was 4. The most frequent answer to almost all questions in phase 3 was 5, excellent, except for the questions related to enjoying the work and the group, which was 4 (Figure 4). Only 3 students wrote comments at the end: "Maximum 3 elements per group; everything else was interesting."; "Smaller groups.";
And "It's preferable that everyone can choose their group colleagues". Initially there were 11 groups of 5 students (i1). There were final presentations for 10: two groups of one student and two groups with two students (Table 1). Of the 33 students who attended any test (test1 + test2+ test3 or appeal exam), 29 passed and 4 failed. Only two of the B. Academic year 2 P1a survey was answered by 34 students, P1b by 35, P2a by 28 and P2b was answered by 20 students. Although the distribution of groups in the first part of the project foresees 37 students and the second part of the project has 36 students, the presentations were made by 36 and 16 students, respectively. The averages dropped in almost all questions, with Pa1> Pa2> Pb1> Pb2. This did not happen only in questions 3, 5 and 6. The best averages of answers were those of questions 5 and 6, with values greater than 4.5 ( Figure 5). The most frequent answer to the questions was 5 (excellent), except for question 1 and 2 in the second phase of project which was 4 ( Figure 6). Only 7 students wrote comments at the end of the Pa2 survey, 3 to say that they liked the work, 4 to show displeasure in relation to the group: "We did a good job, despite having some difficulties. We should have organized ourselves better"; "The group runs well there is a good interaction"; "A very interesting work, which completely helps to better understand the concepts taught and practiced in class". "I enjoyed doing the job and helped me develop programming skills. But I had some complications with the group because we were only two and my colleague had some difficulties in the matter, which meant that I had to do a lot of the work alone.", "I liked the work but I didn't get along with my partner.", "We have no 3rd element", "The work was all done by {} and by me". To the P2b survey, only 4 made final comments, all to refer to their group colleagues: "We divided everything into parts, and everyone did their part. The group worked well.", "I did the work myself without the help of any colleagues.", "None of the group members bothered to come to me and ask about the job. * only worried about it when it was 2 or 3 days before delivery but offered no help." and "It was a job that I worked more in". At the beginning of the first project, there were 37 students and 35 students presented the first project. In the second part of the project there were 36 students at the beginning and 16 students presented the second project. In the latter case, two groups ended with just one student, five of the groups did not show any students. In the Table 3 we see that initially in project 1 there were 11 groups of 3 students and 2 groups of 2 students (i1). There were presentations for the 13 groups, 9 with 3 students and 4 with 2 students (in). In project 2 there were 12 groups of 3 students (i1). There were presentations for the 7 groups, four with 3 students, one with 2 students and two ended with just one student (in). The teacher's assessment (0 to 5) was decreasing in the case of the first project (in which students were grouped by class attendance) (Tg). On the case of the second project, there was no significant difference in relation to the grades attributed by the teacher to each of the groups. Columns C1, C2 and C3 show the evaluation of the pairs in a decreasing way. These Of the 27 students who attended any test (test1 + test2 or appeal exam), 15 passed and 12 failed. Of the 15 that were approved, 14 presented themselves in the two phases of the project (P1 and P2). 
Of the 12 who failed, only 2 worked on the second project. None of the 25 students who missed the exams submitted and presented the second phase of the project. In the second academic year the project grade would not be considered in the supplementary (resit) season; in the first academic year the project grade was worth 25% of the final grade. In both academic years, only one student who passed did not do the project. It is concluded that the work helped the students obtain the necessary skills. In the second year, it was found that a large part of the students who scored low in the first test did not do the second project: the reasons may be a lack of the necessary knowledge, student strategy, or dropping out of the course. The answers to the questionnaires show that most students really liked the project, almost always rating each of the items asked as excellent. However, the average of the responses was almost always decreasing as the semester progressed.

DISCUSSION

Considering only the approval rates, we find that academic year 1 had better results: 88% or 53% (counting, in the first case, only the students who attended the assessments, and in the second also those who never appeared), against 56% or 29% in the second academic year. In the second academic year the share of the final grade allotted to the projects was relatively small (10% + 10%) and the work […]. There were many problems with the working groups, which is reflected in the evaluations attributed to peers. These ratings were better in the first part of the project than in the second. These evaluations did not always reflect what happened within the groups: sometimes students felt that they had no support from colleagues and still evaluated those same colleagues with positive grades.

This group work was very beneficial for students who were motivated and intended to succeed in the course, but not for students who do not work daily or who have difficulties in acquiring the skills necessary to succeed in a course of this type. A group project like the one proposed further widens the gap between students who pass and those who fail or drop out.

CONCLUSION

These two experiences were very different: in the case of SCRUM there are several iterations up to the final result, while in Project-Based Learning there are two final products with little time for iteration. In the first case there was one group of five students throughout the semester; in the second, the groups were changed in the middle of the semester, that is, there were two different groups of three students each, changing mid-semester. In both cases it was found that the students who passed the course did the work (in both academic years, only one student did not and was still successful at the end of the semester). In the second case, there is a large percentage of students who did not do the second project, probably because the groups were formed using the grades of the first test, so the last groups were made up of the weakest students.

There are several lessons to be learned: a final project with several iterations seems to be a better alternative than two small assignments. Several students reported that five students is too large a group, but three seems too small for an assignment of medium/high complexity. If the weight of the project in the grading formula is small (or even non-existent in resit or special seasons), it is no longer "motivating" for the weakest students to work. In both academic years, the better students were even more motivated by an active teaching strategy.
The weakest students are unable to follow a project (which they consider difficult) and give up, especially if they are grouped by first-test grade. The challenge, then, is to motivate these students and reduce the dropout rate.
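As a closing note, the approval rates quoted in the Discussion follow directly from the counts reported in the Results; a quick arithmetic check (Python):

year1_attended, year1_passed, year1_enrolled = 33, 29, 55
year2_attended, year2_passed, year2_enrolled = 27, 15, 52

print(f"{year1_passed / year1_attended:.0%}")  # 88% of those who sat any test
print(f"{year1_passed / year1_enrolled:.0%}")  # 53% of all enrolled students
print(f"{year2_passed / year2_attended:.0%}")  # 56%
print(f"{year2_passed / year2_enrolled:.0%}")  # 29%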
2020-07-16T09:06:47.662Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "f96a8fd46c6aad05b3b143558ebb96fcb0382b43", "oa_license": "CCBY", "oa_url": "https://repositorio.upt.pt/bitstreams/08cf3480-954b-46a7-9dff-c1ffa66df7ec/download", "oa_status": "GREEN", "pdf_src": "IEEE", "pdf_hash": "79fcf04a6b8b44e6424836460b1ad77bbf27ef4e", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
128022870
pes2o/s2orc
v3-fos-license
Mountain destination image held by residents and tourists

Abstract of chapter one

Chapter one is the literature review on tourism, consumer behavior, mountains, mountain tourism, destination image, place-attachment and the impacts of tourism development. Tourism destinations are spaces strongly affected by the imaginary. When considering destination image as the sum of beliefs, ideas, and impressions that an individual has of a destination (Crompton, 1979), it includes cognitive and affective components (Stepchenkova & Morrison, 2008). The cognitive image component consists of beliefs and knowledge about a destination, primarily focusing on tangible physical attributes (Stabler, 1988; Pike & Ryan, 2004). The affective image component, on the other hand, represents feelings about a destination (Baloglu & Brinberg, 1997; Beerli & Martín, 2004).

The attractiveness of mountains is mainly based on their symbolic image. The motivations and expectations of tourists visiting mountains were in the past, and still are today, associated with different perceptions and social connotations of that particular space (Silva et al., 2009). Many representations of the mountain have their origin in an imaginary associated with ancient legends, inherited from a past of magic and mythical beliefs common to all humanity. The new paradigm in tourism research emphasizes, besides destination image, the understanding of the emotional and symbolic subjective meanings associated with natural places, as well as the connection of people to them (Williams & Vaske, 2003). Moreover, it stresses that natural areas are more than geographical environments with physical characteristics. They are fluid, convertible, dynamic contexts of interaction and memory, and therefore susceptible to distinct forms of place-attachment (Stokowski, 2002). Additionally, these environments are also particularly vulnerable (Hillery et al., 2001; Diedrich & García-Buades, 2009). In natural areas, specifically in mountains, which are characterized by the fragility of both social and ecological systems, tourism is known to induce a series of social, cultural, ecological and economic changes, most of the time irreversible (Stonich, 1998, 2000; Belsky, 1999). In sum, tourism mountain destinations are places with powerful symbolic features that exert a strong influence on destination image formation, eventually leading to high levels of place attachment. On the other hand, mountains, as natural spaces, are more vulnerable to the effects of tourism development.

Abstract of chapter two

Chapter two presents the development of a conceptual model resulting in the mountain destination image scale (MDI Scale), which incorporates seven categories of image: (1) Mystique/Sacred, (2) Natural/Ecological, (3) Historic-Cultural, (4) Social and Prestige, (5) Sport and Leisure, (6) Facilities and Infrastructures, and (7) Affective. Mountains are perceived as sacred places and a source of spiritual renewal (Jokinen & Veijola, 2003), an image reflected in their designation as "cathedrals of the world" (Mathieu, 2006). Moreover, mountains are seen as an ecological, scenic and environmental sanctuary of nature (Veyret, 2001; Krauchi et al., 2000). Mountains are also guardians of a historic and cultural heritage that is part of local people's identities and has a strong touristic value (Goeldner et al., 2003), making them singular, alternative and prestigious spots (Vengesayi & Mavondo, 2004).
The social meaning of mountains also combines mountains and sports in an adventurous way, associated with specific equipment, facilities and infrastructures (Nepal & Chipeniuk, 2005). Last, but not least, mountains are regarded as affective places capable of inducing strong feelings and emotions in those who visit them or live there.

Abstract of chapter three
The methodology is described in chapter three with the description of the data analysis, which is mainly quantitative, but also incorporates a qualitative element. The validation of the proposed conceptual model was achieved through four procedures of analysis. Firstly, a descriptive data analysis was undertaken with univariate and bivariate analysis, taking into account statistical indicators. The second procedure was exploratory factor analysis (EFA), aiming at determining the relationship between the observed and latent variables. Thirdly, once the variables representing each factor and the number of factors were defined, a confirmatory factor analysis (CFA) was applied using full-information maximum likelihood (FIML) estimation procedures available in LISREL (Jöreskog & Sörbom, 1993). In order to assess nomological validity, measures were tested with respect to some other constructs to which destination image is theoretically related (cf. Churchill, 1995), such as place-attachment and impacts of tourism development. The analysis and data processing were performed using the latest versions of SPSS and LISREL. Finally, for the content analysis, the adjectives suggested by the respondents were first categorized and then analysed within broader categories, which were treated as dimensions of the mountain destination image construct, with frequencies of occurrence revealing the importance of each dimension.

Abstract of chapter four
The results are presented in chapter four. From both the quantitative and qualitative analyses two different scales resulted: the TMDI, a mountain destination image scale held by tourists; and the RMDI, a mountain destination image scale held by residents. Tourists associated mountains with historic-cultural, natural/ecological, social and prestige, sport and leisure, and affective image dimensions. On the other hand, residents perceived mountains as mystique/sacred, life and health, historic-cultural, and affective places. Also, tourists and residents were shown to establish different emotional bonds with mountain sites. Due to their temporary stay, tourists tend to be less territorially bound, consequently revealing less place-attachment than residents. One of the features of modern tourism is the fact that tourist trips, particularly those involving longer distances, disrupt the sense of belonging to a specific place. If tourists do not feel any belonging to the place they visit, they may lose the sense and comprehension of the environmental limits of human action. Likewise, tourists do not reveal a strong sensitivity regarding environmental impacts in mountains.

Abstract of chapter five
Chapter five presents the conclusions of the study, namely: discussion of results, implications, limitations and directions for future research. Tourism mountain destinations have a particular image and meaning for tourists and residents.
The mountain destination image held by tourists integrates the natural/ecological, social and prestige, historic-cultural, affective, and sport and leisure dimensions. On the other hand, from the perspective of local residents, mountains are regarded as mystique/sacred, life and health, affective and historic-cultural spaces. These results could assist mountain destination managers in defining their marketing strategy. In fact, effective tourism marketing is impossible without an understanding of consumers' image, while integrated destination marketing leading to sustainable tourism development should also include residents' views. There are still some study limitations to be considered. From a theoretical standpoint, despite the extensive literature review, the study might omit and therefore not consider other specific relevant mountain image dimensions. The ideal would be the application of this conceptual model as an image measurement instrument to all mountain destinations, which however would have been out of the scope of this individual PhD project. In any case, the replication of this study and the corresponding extension of the model to other mountain destinations (or other destinations with similar characteristics), particularly outside Europe, would be most interesting for a more general validation. Finally, tourism destination image is a dynamic concept, because images are not static but change over time (Gartner & Hunt, 1987; Gallarza et al., 2002). It would therefore seem desirable to carry out longitudinal studies that deal with the process of image formation and change over longer time periods.

Goal and objectives of the dissertation
Goal
Some tourism destinations, such as mountain places, have powerful symbolic features that exert a strong influence on destination image formation. Since mountain regions have become one of the most attractive tourism destination areas, being the choice of 500 million tourists annually (Thomas et al., 2006), and their attractiveness is mainly based on their symbolic image, the main goal of this study was to analyse, in a holistic and multi-disciplinary approach, residents' and tourists' images of mountain destinations, and the respective gap.
Objectives
The aim was to develop the MDI Scale (Mountain Destination Image Scale) in order to assess a wide set of tourism mountain destination image parameters. Within the MDI Scale, images are related to cognitive and affective factors. The study aimed at understanding, in particular, the differences between local residents and tourists with respect to this mountain image. The study aimed at increasing social, cultural and scientific knowledge regarding mountains and their social representations (held by tourists and residents), and at thereby helping mountain destinations to define better adjusted management and marketing strategies.
Methodology
The study combined quantitative and qualitative survey techniques. The variables used to assess cognitive destination image in the survey instrument were developed on the basis of an extensive literature review related to destination image and mountain constructs. In total, 103 studies were reviewed and pre-established scale items were integrated into the developed measurement instrument.
The initial scales were adjusted to the reality of the tourists and local residents being surveyed, as well as to the specificity of the mountain destinations being studied. Tourists and residents were asked to rate the mountain place as a tourism destination on a list of 49 attributes using a 5-point Likert-type scale ranging from 1 (offers very little) to 5 (offers very much). The affective dimension of tourism destination image was measured by 9 semantic differential scales based on a literature review of 22 studies. Both scales (Likert and semantic differential) were also discussed with experts in the field of destination image measurement. Additionally, respondents were asked to answer open-ended questions and to suggest three adjectives related to their subjective mountain perceptions. This approach helps identify other holistic or unique features associated with the mountain destination. The questionnaire was personally administered to residents and to each individual tourist during their stay at the mountain tourist sites: the Peaks of Europe (Spain), the Alps (France, Austria and Switzerland) and Serra da Estrela (Portugal). The main survey was conducted from March through July of 2009 and 630 valid responses were obtained.
Results
An extensive literature review focusing on the concept of destination image and the social and cultural meanings of mountains over time, and insights from an empirical study of 315 tourists and 315 residents in European mountain destinations (Serra da Estrela in Portugal, the Alps in France, Austria and Switzerland, and the Peaks of Europe in Spain), indicate that this multi-dimensional scale incorporates five mountain image dimensions held by tourists: (1) historic-cultural, (2) natural/ecological, (3) social and prestige, (4) sport and leisure, and (5) affective; and three image dimensions held by residents: (1) mystique/sacred, (2) historic-cultural and (3) affective. The content analysis of the open-ended questions reinforces these results but additionally reveals "life and health" as a significant dimension of mountain image for residents. The results reveal differences in the mountain destination image held by tourists and residents, suggesting five gaps: (1) natural/ecological, (2) sport and leisure and (3) social and prestige, which are mountain image dimensions significant only for tourists; and (4) mystique/sacred and (5) life and health, which are mountain image dimensions significant only for residents.
Theoretical conclusions
Mountains are cultural, natural, social and physical spaces, which are socially, cognitively and emotionally constructed. Therefore, measuring tourism mountain destination image implies focusing on tangible mountain attributes and also on mountain intangibles or affective dimensions, considering the social and cultural meanings of these spaces over time. On the other hand, tourists and residents regard mountain places differently, based on their experience, activities, motivations, values and place-attachment.
Practical application of the dissertation
The study could contribute to tourism marketing and management practice, allowing tourism mountain destinations to implement effective positioning strategies, to increase market segmentation options, to enhance product development and communication strategies, and generally to improve marketing-mix strategies, particularly concerning the development of an effective mountain destination brand.
It is important for mountain destination marketers and managers to understand and analyze different mountain image perspectives and adjust positioning strategies for greater effectiveness, considering both tourists and their host community.
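As a rough illustration of the content-analysis step described in chapter three's methodology, the sketch below tallies respondent-suggested adjectives into broader image dimensions. The codebook keywords and responses are invented for illustration; the dissertation's actual coding scheme is richer and is not reproduced here.

```python
from collections import Counter

# Hypothetical coding scheme mapping suggested adjectives to broader
# image dimensions (placeholder entries, not the study's actual codebook).
CODEBOOK = {
    "sacred": "mystique/sacred", "mystical": "mystique/sacred",
    "green": "natural/ecological", "wild": "natural/ecological",
    "historic": "historic-cultural", "traditional": "historic-cultural",
    "exclusive": "social and prestige",
    "adventurous": "sport and leisure",
    "healthy": "life and health",
    "peaceful": "affective", "beautiful": "affective",
}

def tally_dimensions(adjectives):
    """Count how often each image dimension occurs among the adjectives
    suggested by respondents; adjectives not in the codebook are skipped."""
    return Counter(CODEBOOK[a] for a in adjectives if a in CODEBOOK)

# Illustrative responses: three adjectives per respondent.
responses = [["green", "peaceful", "sacred"], ["wild", "healthy", "historic"]]
flat = [adj for triple in responses for adj in triple]
for dimension, freq in tally_dimensions(flat).most_common():
    print(f"{dimension}: {freq}")
```

The frequencies of occurrence per dimension then indicate, as in the study, the relative importance of each dimension for tourists and for residents.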
2019-04-23T13:28:18.772Z
2012-07-01T00:00:00.000
{ "year": 2012, "sha1": "cf6ebcec5f1ebf118747b4d5f9244ce75b7775ab", "oa_license": "CCBY", "oa_url": "https://ejtr.vumk.eu/index.php/about/article/download/108/108", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "954e7ed512cd598cc7768cb43590cda701fe3a8a", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Geography" ] }
10858130
pes2o/s2orc
v3-fos-license
Influence of lattice distortion on the Curie temperature and spin-phonon coupling in LaMn$_{0.5}$Co$_{0.5}$O$_{3}$

Two distinct ferromagnetic phases of LaMn$_{0.5}$Co$_{0.5}$O$_{3}$ having monoclinic structure but distinct physical properties have been studied. The ferromagnetic ordering temperature $\textit{T}_{c}$ is found to be different for the two phases. The origin of such contrasting characteristics is assigned to changes in the Mn-O-Co distances and angles resulting from distortions observed in neutron diffraction studies. Temperature dependent Raman spectroscopy provides evidence for these structural characteristics, which affect the exchange interaction. The difference in B-site ordering, which is evident from the neutron diffraction, is also responsible for the difference in $\textit{T}_{c}$. Raman scattering suggests the presence of spin-phonon coupling for both phases around $\textit{T}_{c}$. Electrical transport properties of both phases have been investigated in light of the lattice distortion.

I. INTRODUCTION
LaMnO3 is an "A-type" antiferromagnet where the spins are aligned ferromagnetically in the x-y planes, with adjacent planes stacked antiferromagnetically along the z-axis [1]. When transition metals are substituted at the Mn sites of LaMnO3, ferromagnetism is induced [2]. Interestingly, LaMn0.5Co0.5O3, LaMn0.5Ni0.5O3 and LaCo0.5Ni0.5O3 show ferromagnetism even though their parent compounds LaMnO3, LaNiO3 and LaCoO3 are not ferromagnetic [2,3]. Similarly, ferromagnetism is also induced when Mn is substituted with nonmagnetic Li+ or Zn2+, which creates Mn4+ and induces strong ferromagnetism, whereas substitution with trivalent ions like Rh3+ or Ga3+ causes only feeble ferromagnetism due to the lack of Mn4+ [4]. The origin of ferromagnetism in one such B-site doped manganite, LaMn1-xCoxO3 (LMCO), was initially proposed to be due to Mn3+-Mn4+ double exchange interactions [5-7], while later it was suggested that super-exchange interactions of Mn3+-Mn3+ or Mn4+-Co2+ could be responsible [2,8]. LMCO undergoes a structural transition from orthorhombic to rhombohedral when x ≈ 0.6 [2,9], where orthorhombic and rhombohedral represent the structures of the parent compounds LaMnO3 and LaCoO3 respectively. Interestingly, for x = 0.5, LMCO has been reported to have two ferromagnetic phases [2,9]. Initial studies revealed that the sample contained both orthorhombic and rhombohedral phases, which give rise to the two different ferromagnetic phases [2,10]. It has been shown earlier that LMCO can be prepared with two different Curie temperatures (Tc), where the sample prepared at 700 °C had Tc = 225 K and the sample obtained at 1300 °C had Tc = 150 K [11]. Based on neutron diffraction studies, it has been proved that the low Tc phase, previously misjudged as orthorhombic, actually has a monoclinic structure with space group P21/n and long-range Mn/Co ordering [12]. Similar to B-site doped manganites, suitable A-site doping can also induce ferromagnetism in manganites. The internal pressure exerted by A-site ions of different sizes acts on the electronically active band formed by the overlap between the d orbitals of the B-site transition metals and the p orbitals of oxygen, and thereby directly controls the orbital overlap [13].
Consistent with this explanation, earlier studies showed that A-site substituted LaMnO3, which is ferromagnetic, exhibits different Tc values for different dopants at appropriate concentrations [14]. Although the reasons behind the changes in Tc for A-site doped manganites have been proposed, the factors responsible for the two distinct Tc values of B-site (Co) doped manganites have not been clearly understood. Based on this fact, the basic theme of our work is to identify the reasons for the occurrence of two ferromagnetic phases in LMCO. In this paper, we evaluate the structural aspects of LMCO which reflect on its intrinsic characteristics. Similar to earlier reports from XRD, our results also support the high Tc phase having a rhombohedral structure. Interestingly, however, from neutron diffraction (ND) we firmly conclude that the high Tc phase also has a monoclinic structure. Neutron diffraction of the high Tc phase is reported for the first time. Investigations based on neutron diffraction reveal that the B-site ordering is higher for the high Tc sample. We substantiate that the two phases, despite sharing the same crystal structure, have distinct intercationic distances (dTM-TM), intercationic angles (θTM-O-TM) and B-site ordering, which are responsible for the difference in Tc. The two phases of LMCO, with distinct dTM-TM, θTM-O-TM and B-site ordering, have been investigated and the distortion-aided changes in transport and magnetic properties are studied. Raman scattering is an extremely sensitive physical tool for examining lattice distortions, spin-phonon coupling, etc., and has been extensively used in manganite systems to gain insight into their physical properties. In this paper, we have used Raman spectroscopy to corroborate the lattice distortions inferred from Rietveld analysis and to provide conclusive evidence on the structural aspects of LMCO. Based on temperature dependent Raman scattering studies on both phases, we verify that there is strong spin-phonon coupling around Tc for both structures of LMCO. In an earlier report [15], a sharp discontinuity in Raman frequency at 150 K, coincident with the magnetic transition, was assigned to spin-lattice interaction, consistent with our observation of strong spin-phonon coupling in this system.

II. EXPERIMENTAL DETAILS
The low Tc sample was prepared by solid state reaction with La2O3, MnO2 and Co3O4 as the precursors. The samples were heated to 1370 °C and furnace cooled, with several intermediate grindings. Preparation of this sample by quenching in liquid N2, as reported earlier [10], was avoided since quenching-induced strains might result in peak shifts in XRD/ND, leading to ambiguous conclusions during Rietveld refinement. The high Tc sample was prepared by the glycine-nitrate method. The prepared powder was then heated to 790 °C and furnace cooled, with a few intermediate grindings. The oxygen stoichiometry was examined for both samples by iodometric titration and was experimentally verified to be nearly stoichiometric within the experimental limits. XRD was taken using a Bruker D8 Advance diffractometer with Cu Kα radiation. Neutron diffraction experiments were performed at room temperature using the five linear position sensitive detector based neutron powder diffractometer (λ = 1.2443 Å) at the Dhruva reactor, Bhabha Atomic Research Centre, India. Raman experiments were carried out using a custom built Raman spectrometer [16].
The laser excitation used was 532 nm from a frequency doubled solid state Nd:YAG laser, with a low power of 8 mW on the sample. The measurements were carried out in a backscattering geometry from 80 K to 300 K (± 1 K).

III. RESULTS AND DISCUSSION
The crystallographic details of the powder samples were analysed by Rietveld refinement of X-ray diffraction and are shown in figure 1. Both samples were found to be single phase. The low Tc sample could be indexed as monoclinic (LTcM) with space group P21/n (χ2 = 3.756; Rwp = 2.79%), while the high Tc sample could be indexed as rhombohedral (HTcR) with space group R3C (χ2 = 3.367; Rwp = 2.70%). However, it has to be noted that the high Tc sample could also be fitted with P21/n (χ2 = 4.101; Rwp = 2.98%). In the case of LTcM, a reduction in symmetry is clearly evident from the splitting of the main peak, as shown in the inset of figure 1(a). The values of the lattice parameters, bond lengths and bond angles are listed in Table 1. We rely more on neutron diffraction than on X-ray diffraction, since the latter is less sensitive to low-Z elements such as oxygen. Thus, using XRD to locate the atomic positions of oxygen atoms is inaccurate and leads to improper conclusions. The monoclinic structure of LTcM is well supported by the present ND results (χ2 = 41.7; RBragg = 6.57%) and the earlier ones [12]. In the case of HTcR (for which no ND investigations had previously been done), not all observed Bragg peaks in the neutron diffraction pattern could be indexed with the rhombohedral space group R3C, suggesting a lower symmetry for this compound. All observed Bragg peaks can be indexed with the monoclinic space group P21/n, as observed for LTcM, which gives a better agreement between the observed and calculated (Rietveld refined) diffraction patterns (χ2 = 20.4; RBragg = 5.9%). Thus, although it is possible to fit the XRD results to R3C, ND clearly shows a few peaks that remain un-indexed in that space group and were not even detected by XRD. From the Rietveld analysis, Uiso values of oxygen were found to be 0.96(3) [17,18]. In further discussion, the high Tc sample with monoclinic structure will be denoted HTcM instead of HTcR. It should be noted that the occupancies of all the elements, as arrived at from the Rietveld refinement of XRD and neutron diffraction, show that the compositions of the two phases are the same (within 1% in the case of XRD and less than 0.5% in the case of neutron diffraction). Using both techniques it was also observed that there is a minor deficiency at the B-site for both samples, and that the samples are compositionally identical to each other. In earlier studies [11] it has been reported that the orthorhombic phase (Tc ~ 150 K), exhibiting an ordered moment (μ) of 4.01 μB, could be assigned either to Mn3+ high spin (HS, t2g^3 eg^1) - Co3+ intermediate spin (IS, t2g^5 eg^1) or to the Mn4+ - Co2+ (HS, t2g^5 eg^2) state, while in the case of the rhombohedral phase (Tc ~ 225 K), having μ of 3.52 μB, the spin state was assigned to Mn3+ (HS) - Co3+ (LS, t2g^6 eg^0). Later, both phases were shown to have Mn4+ - Co2+ (HS) character, while the monoclinic phase contained Co3+ (LS) [19].
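As a bookkeeping aid for the spin-state assignments quoted above, the snippet below lists the spin-only ordered moments (g·S per ion) for the candidate configurations. This is a rough illustration only: it ignores orbital contributions and covalency, which is one reason the refined moments (4.01 and 3.52 μB) need not match these values exactly.

```python
# Spin-only ordered moments (mu = g*S in Bohr magnetons, g = 2) for the
# candidate spin states discussed in the text; illustrative only.
g = 2.0
spins = {
    "Mn3+ HS (t2g^3 eg^1)": 2.0,   # S = 2
    "Mn4+    (t2g^3)     ": 1.5,   # S = 3/2
    "Co2+ HS (t2g^5 eg^2)": 1.5,   # S = 3/2
    "Co3+ IS (t2g^5 eg^1)": 1.0,   # S = 1
    "Co3+ LS (t2g^6)     ": 0.0,   # S = 0
}
for ion, S in spins.items():
    print(f"{ion}: {g * S:.1f} muB")

# Average per B site for the two candidate pairings:
print("Mn3+(HS) - Co3+(IS):", (4.0 + 2.0) / 2, "muB per B site")
print("Mn4+     - Co2+(HS):", (3.0 + 3.0) / 2, "muB per B site")
```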
In the present work, in order to understand the magnetic properties, temperature dependent DC and AC susceptibility measurements were performed at a constant magnetic field of HDC = 100 Oe and HAC = 170 mOe with a frequency of 420 Hz, respectively. FC and ZFC magnetisation are shown in figures 3(a) and (b) for LTcM and HTcM respectively. The Tc for LTcM and HTcM are found to be 123 K and 232 K respectively, exhibiting a "Brillouin-like" feature which is a characteristic signature of ferromagnetic materials. Compared to DC magnetic studies (such as SQUID/VSM/AGM), AC susceptibility is a sensitive technique for probing Tc at very low magnetic fields, and can easily reveal the presence of more than one magnetic phase. These measurements revealed that the LTcM and HTcM samples had ferromagnetic transitions at 138 K and 243 K respectively, as shown in figures 4(a) and (b). We have not observed any frequency dependent (75 to 1000 Hz) changes in Tc, which suggests the presence of ferromagnetic ordering. Though single phase nature was observed by XRD/ND in LTcM, the inset of figure 4(a), representing the inverse susceptibility χ'^-1, shows a small kink around 225 K which may be due to the presence of an extremely small residual high Tc phase. In order to understand the change in Tc, we have used the analogy from A-site doped systems. In general, Tc depends on the total angular momentum (J), the number of nearest neighbours (n) and the exchange energy (Jex), given in mean-field theory by $T_c = 2\,n\,J_{ex}\,J(J+1)/3k_B$. Apart from the lattice effects, B-site ordering was also examined from ND. The degree of ordering is estimated from the intensity of the (101) and (011) Bragg peaks. Figure 5(a) shows the calculated diffraction patterns for different degrees of ordering (0 to 100%) of the Mn and Co ions at the 2d and 2c sites. The integrated intensity of the (101) and (011) peaks is higher for better ordering. From the experimental data we find that the integrated intensity of HTcM is higher than that of LTcM, as shown in figure 5(b). HTcM is found to have a very high ordering of 81.2% compared to LTcM with 47.6%, as shown in the inset of figure 5(c) in comparison with the calculated values. In perovskite systems, it has been observed that B-site ordering can affect magnetic properties such as the Curie temperature [21]. In general, the difference in charge between the cations at the B-site determines how well the cations order [22]. It has been shown earlier that the degree of ordering in perovskite systems depends on synthesis parameters such as time [22] and temperature [23] (i.e., better ordering for longer heating times or higher temperatures). This convention does not hold in the case of LMCO. Better B-site ordering is found if the difference in charge between the B-site cations is more than two [21]. Using XAS, it was reported that the residual Mn3+ content in the low Tc sample was higher than in the high Tc sample [19]. The presence of Mn3+ reduces the effective charge difference of the B-site cations to less than two and thus deteriorates the ordering. As the presence of Mn3+ is greater in LTcM than in HTcM, it has a direct effect on the ordering, well correlated with the results obtained from ND. The Jahn-Teller distortion exhibited by Mn3+ can also influence the electronic properties. In order to understand the lattice distortion in detail, both LTcM and HTcM were examined by Raman scattering, whose temperature dependence can be used to examine the spin-phonon coupling.
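Before turning to the Raman results, here is a minimal sketch of the ordering estimate described above: integrate the (101)/(011) superstructure peak intensity and read the ordering degree off a calibration built from patterns calculated for known ordering fractions. The peak window, background treatment and calibration numbers below are placeholders, not the values used in the actual refinement.

```python
import numpy as np

def integrated_intensity(two_theta, counts, window):
    """Integrate a Bragg peak over a 2-theta window after subtracting a
    crude linear background estimated from the window edges."""
    lo, hi = window
    mask = (two_theta >= lo) & (two_theta <= hi)
    x, y = two_theta[mask], counts[mask]
    background = np.linspace(y[0], y[-1], len(y))
    return np.trapz(y - background, x)

# Illustrative data: a Gaussian "(101)/(011)" peak on a flat background.
tt = np.linspace(18.0, 22.0, 400)
obs = 50 + 300 * np.exp(-((tt - 20.0) / 0.15) ** 2)

I_obs = integrated_intensity(tt, obs, window=(19.3, 20.7))

# Interpolate the ordering degree from intensities calculated for known
# ordering fractions (made-up calibration points for illustration).
calc_order = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # % ordering
calc_I = np.array([0.0, 22.0, 45.0, 60.0, 80.0])        # arbitrary units
print(f"Estimated ordering: {np.interp(I_obs, calc_I, calc_order):.1f} %")
```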
From lattice dynamical calculations (LDC), LMCO with monoclinic and rhombohedral structures was predicted to have 24 and 8 Raman modes respectively [24]. It has been shown that LMCO has TM-O octahedral vibrations similar to those of LaMnO3 (Pbnm) and octahedral fluoride complexes, and its predominant peaks at 490 and 645 cm^-1 were assigned to antistretching (ωA) and stretching (ωS) modes respectively [25]. LDC confirms that the peak at 697 cm^-1 is assigned to a stretching ("breathing") mode, while the one at 490 cm^-1 is of a mixed type (ωA,B, i.e., antistretching and bending). Although LDC suggests ωS (P21/n) ~ 697 cm^-1, experimentally it was verified to be 645 cm^-1 [24]. The bond lengths are compared in Table II, from which we infer that ωS follows the bond lengths; moreover, θO-TM-O is 180.0° for both LTcM and HTcM, ruling out the possibility of the octahedra themselves being distorted. The dependence of ωA,B on temperature is plotted for LTcM and HTcM in figure 7(a), which shows that ωA,B (LTcM) > ωA,B (HTcM) over the entire temperature range. In the case of ωA,B (HTcM), the initial hardening effect is due to ωlatt, i.e., dTM-O, as ϕ varies slightly and subsequently relaxes to an equilibrium position, ensuring that ωA,B becomes almost invariant below a certain temperature. Such initial hardening is less pronounced in LTcM. In the case of manganese oxides, the MnO6 octahedra when rotated along the [111] axis give rise to a rhombohedral symmetry, and when rotated along the [110] axis result in an orthorhombic symmetry. Such rotations reduce θMn-O-Mn from the ideal 180° to (180° - ϕ), for which ϕ(R3C) < ϕ(Pbnm/Pnma) [13], where ϕ is the tilt angle of the octahedra. The octahedra are tilted in such a way that dLa-O has an optimal bond length, affecting the spring constant of the vibrating system. In our case, dLa-O = 2.646 Å and 2.652 Å for LTcM and HTcM respectively. In general, the tilt angle of the octahedra controls the rotational modes and dLa-O controls the bending modes. Since dLa-O is shorter for LTcM than for HTcM, the bending mode of LTcM vibrates at a higher frequency. Although for higher values of ϕ (i.e., more distortion) ωA,B should be higher, this is not observed in our case, as both dLa-O and dTM-O are larger for HTcM and thus reduce the bending frequency compared to LTcM. Comparisons of t, ωA,B and ϕ at 300 K are shown in figure 7(b). A similar dependence of the Raman shift on t was evaluated for A-site doped manganites, where t depends on the A-site ion [26] through the internal pressure exerted by it [14]. While such A-site doped manganites with t > 0.925 had rhombohedral structure, our case of B-site (Co) doped LaMnO3 retains P21/n for higher values of t, i.e., t = 0.9473/0.9470. The magnetic ordering affects the phonon behaviour below Tc because, in magnetic materials, Δω(T) depends on lattice expansion/contraction (Δωlatt), anharmonic scattering (Δωanh), phonon renormalisation of the electronic states near the spin ordering temperature due to electron-phonon coupling (Δωren), and the spin-phonon contribution caused by the modulation of the exchange integral by lattice vibrations (Δωs-ph) [27]. Δωren can be ignored, as the carrier concentration is low. In order to understand both phases individually, we have studied the spin-phonon coupling for LTcM and HTcM separately.
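Identifying the spin-phonon contribution amounts to fitting the paramagnetic-region Raman shifts to the anharmonic baseline used in the next paragraph and then examining the deviation Δω(T) below Tc. A minimal sketch follows, with invented data points; scipy is assumed, and the fitted ω0 and C are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import hbar, k

def omega_anh(T, omega0, C):
    """Cubic anharmonic decay model for the phonon frequency (in cm^-1):
    omega(T) = omega0 - C * (1 + 2 / (exp(hbar*w0 / (2*kB*T)) - 1))."""
    w0_si = omega0 * 2 * np.pi * 2.998e10     # cm^-1 -> rad/s
    x = hbar * w0_si / (2 * k * T)
    return omega0 - C * (1 + 2 / np.expm1(x))

# Illustrative stretching-mode data in the paramagnetic region (T > Tc).
T = np.array([240, 250, 260, 270, 280, 290, 300], dtype=float)
w = np.array([646.1, 645.9, 645.6, 645.4, 645.1, 644.8, 644.5])

popt, _ = curve_fit(omega_anh, T, w, p0=[650.0, 2.0])
print("omega0 = %.1f cm^-1, C = %.2f cm^-1" % tuple(popt))

# Below Tc, the residual Delta_w(T) = w_measured(T) - omega_anh(T, *popt)
# would then be attributed to the spin-phonon contribution.
```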
In both LTcM and HTcM, ωS deviates for T < Tc from the regular anharmonic dependence [28,29], $\omega_{anh}(T) = \omega_0 - C\,\left(1 + 2/(e^{\hbar\omega_0/2k_BT} - 1)\right)$, clearly indicating spin-phonon coupling, as shown in figure 8. In the case of LTcM, ωS increases (hardens) as the temperature is lowered. This hardening is due to anharmonicity (ωanh). This anharmonic behaviour shows a mild deviation around 230 K, which is a clear signature of Δωs-ph from the residual HTcM phase inferred from χ'^-1. Such effects are also observed in the FWHM of ωS, which is related to the phonon lifetime, where there is a slope change around 230 K due to Δωs-ph (HTcM). Below 230 K the behaviour of ωS is driven by the LTcM phase. Interestingly, around 175 K we observe an anomalous behaviour in ωS. This is due to two competing factors, namely the strong spin-phonon coupling of HTcM and the approaching transition in LTcM. Hence this softening of ωS below 175 K is due to the increasing influence of the HTcM phase. Below 125 K, LTcM develops a strong spin-phonon coupling, shown by the FWHM (see figure 8(a)). In the case of HTcM, anharmonic behaviour is observed as the temperature is lowered down to 235 K (~ Tc of the HTcM phase). Below 235 K, the behaviour of ωS is anomalous. It shows two distinct changes, around 235 K and 125 K. Incidentally, these temperatures correspond to the Tc of HTcM and LTcM respectively. It is interesting to note that Raman scattering shows evidence of the presence of LTcM in the HTcM phase, which was not observed by other methods. In order to explain this anomalous behaviour we have looked at the FWHM of ωS, which is related to the phonon lifetime. The inset of figure 8(b) shows anomalous behaviour in the phonon lifetime around 240 K and 125 K. On decreasing temperature, one expects the FWHM to decrease. The deviation from this expected behaviour suggests an increase in phonon lifetime, which is consistent with the decrease in phonon energy. This is due to a strong spin-phonon coupling. The presence of LTcM is the reason for the dual effect. These observations are clear indications of the residual presence of one phase in the other. A residual phase can be detected from its Δωs-ph through inelastic scattering, which is a local probe, unlike magnetic methods. For a highly ordered system the phonon lifetime is longer, i.e., the FWHM is smaller. From the insets of figure 8 it is clear that the phonon lifetime is longer for HTcM, as it has better B-site ordering. These results are consistent with the conclusions arrived at from ND. We have also investigated the transport characteristics of LMCO for both phases individually, as shown in figure 9. Over the entire temperature regime, the samples exhibit an insulating nature irrespective of the degree of distortion, with ρ(HTcM) > ρ(LTcM). In the case of LTcM, for T > Tc (i.e., 155 to 300 K, the paramagnetic region), Efros-Shklovskii (ES) VRH [30], given by $\rho = \rho_0\,\exp[(T_0/T)^{\eta}]$, was observed (with η = 0.5), while for HTcM we were unable to fit the data to any of the common conduction mechanisms, such as VRH, polaron hopping and thermally activated transport. This can be understood by taking a closer look at the bending modes observed by Raman scattering. In the paramagnetic region of HTcM, ωA,B exhibits a continuous change, which might be the factor that does not permit the transport to follow any single conduction mechanism, while in the case of LTcM the bending modes are almost the same, i.e.,
ϕ is independent of temperature. To better understand HTcM, high temperature transport studies would be required. That ρ(HTcM) > ρ(LTcM) can be understood from the difficulty an electron faces in hopping from one TM to another due to the θTM-O-TM bond angle. Magnetoresistance (MR) up to 11 Tesla has also been measured for LMCO, and both samples exhibited negative MR. The LTcM and HTcM samples exhibited maximum changes in resistance of ~51% and ~31% respectively at 125 K, as shown in the inset of figure 9. This reduction in resistance in the presence of the magnetic field could be due to the enhanced probability of electron hopping arising from the alignment of the magnetic moments of the transition metal ions in a magnetic field.

IV. CONCLUSION
LMCO with two different ferromagnetic transition temperatures has been studied by AC susceptibility and DC magnetisation. The difference in Tc for LaMn0.5Co0.5O3 has been evaluated and can be understood on the basis of the degree of distortion present in the two phases. Previous studies on LMCO suggested that it can have both R3C and Pbnm structures. While later investigations proved that the low Tc phase is P21/n and not Pbnm, there was no conclusive evidence for the structure of the high Tc phase. Our Rietveld analysis of neutron diffraction and Raman spectroscopy confirms that the high Tc phase is also P21/n. Neutron diffraction studies also confirm better B-site ordering in the high Tc sample, which is well supported by the longer phonon lifetime observed by Raman spectroscopy. The observed difference in Tc for the two phases is due to the lattice distortion, influenced by the tilting of the octahedra and the Mn/Co ordering, which directly affects the exchange energy. The temperature dependent Raman spectra confirm that the deviation from anharmonicity around Tc indicates the presence of strong spin-phonon coupling in both phases of LMCO. We have also evaluated the magnetotransport characteristics of both phases.

Table 2. Results from the Rietveld refinement of neutron diffraction.
2018-04-03T00:20:54.777Z
2010-09-01T00:00:00.000
{ "year": 2010, "sha1": "5e137930f7acd21e9f47189dffc1f2056846895d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1009.1330", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "550f39d2496156b8d33703e457d8b80e300164da", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine", "Chemistry" ] }
259012546
pes2o/s2orc
v3-fos-license
NEGLECTED AND MISALIGNED: A STUDY OF COMPUTER SCIENCE TEACHERS' PERCEPTIONS, BELIEFS AND PRACTICES TOWARDS PRIMARY ICT

The present study attempts to explore aspects of teachers' personal practical knowledge by investigating computer science teachers' perceptions, beliefs, and practices towards Primary ICT just before a curriculum transition period and the replacement of the former program of studies with a "Computer Science" curriculum. For the needs of this investigation, an exploratory sequential mixed methods design was employed. Semi-structured interviews were conducted with 33 computer science teachers, while 157 were surveyed by means of a questionnaire developed from the analysis of the qualitative data. The findings of the study indicate that there is a misalignment between the policy rationale towards ICT, teachers' understanding of ICT and the implementation of ICT in the primary classroom. Due to teachers' lack of professional-pedagogical knowledge, contextual factors and policy decisions, which consistently neglect teachers' needs and personal practical knowledge, CS teachers have developed their own ways of theorizing, conceptualizing and practicing education in ICT. These findings are discussed in the light of the corresponding literature and suggest that structural and curricular transformations in digital education are condemned to carry within them the seeds of their own dismissal when they are not ingrained in the reality of classroom practice and in teachers' practical knowledge, which entails their involvement in the design and formulation of any intended changes.

Introduction
Discussions about the integration of digital technologies into the primary school curriculum have a long history. During the past 30 years a significant number of differentiated terms, such as ICT capability, ICT literacy, Computer Science, Computing and digital literacy, have been initiated by policymakers and used in their rhetoric about the role and rationale of digital technology's incorporation into education. This is clearly not by chance, as changes in the use of terms signify transitions in the content and the meaning of this integration and denote shifts in the direction of educational policies at national and international levels. As economic, vocational and market-driven priorities evolve, so does educational policy rhetoric and reform associated with digital technologies.
Contemporary reconsiderations of the status of digital technologies in education appear to focus on the revival of computer science and argue for the inclusion of computing in both primary and secondary education (Fluck et al., 2016; Passey, 2017; Webb et al., 2017). "A mixture of pressure from industry and lobbying by interest groups has led to resurgent interest in computer science" (Royal Society, 2017: 16). In countries like England, this renewed interest led to narratives against the former ICT curriculum, presenting it as "academically weak" and "vocationally useless", and cultivated its final replacement with the subject of Computing in England's national curriculum (Larke, 2019: 1139). On the other hand, alternative considerations appear to promote a balanced view towards the dichotomy of ICT/digital literacy versus Computer Science/Computing. International and European-wide frameworks, such as the European Digital Competence Framework (Vuorikari et al., 2022) and the ISTE Standards for Students (ISTE, 2016), seem to endorse the development of various "literacies of the digital" as well as aspects of programming and computational thinking. Following such recommendations, several countries have already reformed, or are in the process of reforming, their curricula in order to align their national policies with international trends (Ottestad and Gudmundsdottir, 2018). Apparently, ongoing curriculum reform related to digital education is currently the mainstream tendency across Europe, while curriculum approaches towards the integration of "digital competence" into school curricula vary considerably (Eurydice, 2019), with education systems including it as a cross-curricular theme, as a discrete school subject, as part of the competencies incorporated into other school subjects, or as a blend of these approaches (Eurydice, 2019).
In short, the diversity of perspectives appears to support the idea that the field of education and digital technologies is going through a transition stage, which is as awkward, challenging and uncertain as any in-between period of change. On the other hand, shifting periods are also times for reflection, for in every transition there is a dialectic between continuity and change, which needs to be thoroughly analyzed and properly understood before any structural or other kinds of transformations take place (Wickham, 2010). The process of accounting for this dialectic and identifying the roots of change inside continuity goes beyond the scope of this paper. However, a critical dimension in this dialectic, which is largely neglected in contemporary discourses about reform in the area of digital technologies, is concerned with teachers' beliefs, understandings and implementation of past and present policies. Narratives of digital education curriculum development are mainly presented by theorists and subject specialists, are almost always dehistoricized, and remain substantially within the sphere of influence of policymakers and administrators. Nevertheless, what is often not accounted for is that these policy directives and initiatives, which are created without consideration of research data from the classroom field, without reflection on the history and realistic implementation of past efforts, and to a large extent without the involvement of educators, are bound to be fragmentary and disputable. Due to teachers' enactment of the externally imposed "intended" curriculum, the distance between what policymakers and administrators envisage and what students realistically experience may be quite considerable (Gehrke, Knapp and Sirotnik, 1992). Curriculum filtering, modification and adjustment are expected to take place as teachers interpret official mandates through their personal practical knowledge (Craig, 2011; Clandinin & Connelly, 1992; Priestley & Philippou, 2018), which is found in their professional practices and may be perceived as the particular way "in which any one teacher reconstructs the past and the intentions of the future to deal with the exigencies of a present situation" (Connelly and Clandinin, 1988: 25).

Within this context, the present study attempts to explore aspects of teachers' personal practical knowledge just before the beginning of a curriculum transition period by investigating the experiences and beliefs of a challenging population with distinctive attributes. In particular, the study focuses on computer science teachers' views of, beliefs about and implementation practices of the Greek ICT Curriculum for Primary education, which according to government mandates will soon be replaced by an "ICT and Computer Science" curriculum.

Policy background - Literature review
The incorporation of digital technologies into the Greek school curriculum and the process of their integration may be clearly associated with the multidisciplinary orientation and academic tradition of the Greek curriculum. Thus, the official policy embraced and implemented since the 1990s was, and still is, restricted to the inclusion of distinct subjects with their own uniform and compulsory curriculum at both primary and secondary levels of education, taught by computer science (CS) teachers.
With respect to primary education, the latest curriculum in effect dates back to 2010, when a subject with the title "Technologies of Information and Communication" was introduced into the compulsory program of studies, to be taught for two hours per week in all primary grades by subject specialist teachers. However, this subject differed in a number of ways from previous developments in the area, as well as from other national curriculum subjects. Firstly, following EU recommendations and international tendencies at the time, the 2010 ICT Curriculum endorsed an "ICT Literacy" perspective and focused on the development of digital skills as well as the enhancement of learning capabilities through the use of digital tools, services, and equipment. In terms of subject matter, the curriculum was structured across four strands, namely the use of ICTs as cognitive tools, as means of problem-solving, as technological tools and as a social phenomenon (MoE, 2010; 2011). Secondly, the instructional guidelines accompanying the curriculum anticipated that the subject was clearly oriented toward laboratory work and should be delivered through practical activities and cross-curricular projects. It was recommended that students should be active, while learning experiences should be meaningful and contextualized, in the sense that they should be drawn from and related to the contents of all primary curriculum subjects. Student progress would be evaluated through formative methods of assessment. Thirdly, in contrast to all other primary school subjects, the delivery of the ICT curriculum was not supported by a uniform, official and compulsory school textbook (MoE, 2010; 2011; Jimoyiannis, 2011; Piliouras et al., 2010).

In summary, the delivery of the 2010 ICT Primary Curriculum may be perceived as a challenging and demanding enterprise, for it required considerable expertise in instructional design. Among others, the process of teaching the subject expected CS teachers: (a) to be familiar with all primary subjects in order to design interdisciplinary and contextually meaningful learning activities and cooperate with primary teachers, (b) to have knowledge and skills in the design and implementation of project-based learning and its many manifestations, (c) to be able to transform ICT curriculum goals into everyday learning experiences and organize subject matter into short, medium and long term planning schemes of work, (d) to have knowledge and experience in designing, organizing and creating efficient and effective educational materials, and (e) to be aware of formative methods of assessment and evaluation, forms of interdisciplinarity and learner-centered methods of teaching. However, only a few highly skilled primary teachers in the country would have been in a position to fulfill such expectations. Due to long-term education policies that promoted teachers' "deskilling" (Apple, 2017; Apple and Jungck, 1990), the vast majority of teachers have been alienated from the process of planning and designing everyday learning experiences, as they implement the learning activities contained in the official textbooks provided and approved by the Greek state. Considering this, one may clearly realize the difficulties that CS teachers would encounter in delivering the subject, for an additional reason. Subject specialist teachers in the country are university graduates in a relevant field of specialization, having little or no studies in education sciences (OECD, 2017; Liakopoulou, 2011). In particular, Greek CS teachers may
be described as a population with outstanding university qualifications, holding a Bachelor's degree in Computer Science, Information Technology, Computer Engineering or STEM-related disciplines, yet having little education in curriculum and instruction, educational theory, classroom pedagogy and the organization, culture and context of primary education. Moreover, within the context of a regularly adopted measure of the Greek education authorities [through which the surplus of teachers at one education level is minimized without creating the new teaching posts needed at another (Stylianidou et al., 2004; OECD, 2011)], the vast majority of primary CS teachers were transferred in 2013 (MoE, 2013) from secondary schools to elementary education posts.

From the establishment of the 2010 ICT curriculum until the autumn of 2021, no significant policy changes took place, except for the publication of a ministerial order in 2016 (MoE, 2016) that reduced the instruction time allocated to the ICT subject from two hours to one hour per week across primary education. In the years following the introduction of the Primary ICT subject, a few small-scale studies were conducted with the aim of describing CS teachers' characteristics and experiences with the subject. The results of these studies may be summarized as follows: a) The great majority of Primary CS teachers lack pedagogical competence qualifications (Mpelesiotis et al., 2013). They perceive professional pedagogical knowledge as necessary and require training in teaching methods, pedagogical subject knowledge and classroom management (Panselinas et al., 2015; Gogoulos et al., 2011; Mpelesiotis et al., 2013; Vloutis, 2018; Theodorou, 2012a; Daraviga, 2018; Tziafetas et al., 2013). b) Most teachers reported that they use the ICT curriculum in preparing instruction (Panselinas et al., 2015; Gogoulos et al., 2011), yet they perceive it as a bureaucratic document that they are "obliged" to use (Kallivretaki, 2016). Teachers face considerable difficulties in teaching the subject and adapting to the needs of primary school children (Terpeni et al., 2014). There is increased variation in both the perception of the ICT curriculum and its delivery (Kallivretaki, 2016; Fessakis & Karakiza, 2011; Tziafetas et al., 2013; Drenoyianni, 2014; Bekos, 2021). c) The most significant barriers that teachers report concern the lack of official school textbooks and instructional materials for the delivery of the subject, limited instructional time, lack of adequate equipment and dated infrastructure, high student/computer ratios, high student/teacher ratios, and lack of technical support, which increases their extracurricular duties (Panselinas et al., 2015; Gogoulos et al., 2011; Mpelesiotis et al., 2013; Vloutis, 2018; Michalakopoulos, 2021; Kallivretaki, 2016; Tziafetas et al., 2013; Terpeni et al., 2014; Bekos, 2021). d) According to teachers, the teacher training they received while in service was inadequate, as it was mostly theoretical with limited practical examples and hands-on activities (Theodorou, 2012b; Trapsioti, 2009).
Even though the results of the above studies may seem relevant to the Greek context alone, other studies conducted in different parts of the world and within different policy frameworks report similar concerns. Teachers' perceptions of the ICT/Computing curricula are moderate and variable (Vanderlinde & van Braak, 2011; Barnes & Kennewell, 2017; Larke, 2019; Royal Society, 2017). CS teachers lack educational expertise, whereas teachers without a background in CS lack sufficient subject-specific technical knowledge (Royal Society, 2017; Bottino, 2020; Sentance & Csizmadia, 2017). Poor classroom facilities and resource-related challenges are reported (Sentance & Csizmadia, 2017; McFarlane, 2019), and gaps are found in professional development opportunities (Royal Society, 2017; Sentance & Csizmadia, 2017; McFarlane, 2019; Larke, 2019). Teachers express low confidence and low self-efficacy and require detailed lesson plans, more structured curricula and suitable resources (Stringer et al., 2022; Lockwood & Mooney, 2017; Royal Society, 2017).

The challenges identified in relevant research efforts appear to affirm that the integration of digital technologies into the school curriculum requires systemic and structural changes, considerable investments and well-thought-out policies, firmly consolidated in the reality of educational practice. In this reality, national curriculum tradition and ideology, established structures, organizational contexts, recruitment routines and teachers' pedagogical beliefs, professional competence and occupational status may be perceived as multilevel filters which translate and transform policy visions and affect policy understanding and implementation in practice.

Materials and Methods
Within the context of the aforementioned framework, and in recognition of the immense role of teachers' personal practical knowledge in the employment and application of policy initiatives, the study presented in this manuscript investigated teachers' perspectives towards the ICT curriculum, their conceptualizations of ICT literacy and their self-reported classroom implementation of the ICT subject. For the needs of this investigation, an exploratory sequential mixed methods design was employed (Bryman, 2017; Creswell & Plano Clark, 2018). Therefore, the study was performed in two successive, yet synergistic, phases.

The initial phase focused on the collection of qualitative data. Semi-structured interviews were conducted with a quota sample of 33 computer science teachers who were serving in tenured primary school posts throughout the country. The interviews lasted approximately 2 hours each and were guided by an interview protocol. The protocol's questions were divided into three main topics of discussion, concerned with the description of teachers' classroom practices, teachers' perceptions of the ICT curriculum and teachers' constructs of the term "ICT Literacy". The narrative data collected were then analysed through thematic analysis (Bryman, 2017). The codes, categories and themes identified in the analysis were used as a basis for the construction of a questionnaire, which was in turn utilized in the following phase of the study.
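As a rough sketch of how coded interview data can feed questionnaire construction in a design like this, the snippet below tallies thematic codes and turns frequent ones into candidate Likert items. The codes, threshold and item wording are invented for illustration and do not reproduce the study's actual coding scheme.

```python
from collections import Counter

# Hypothetical codes attached to interview segments during thematic analysis.
coded_segments = [
    "lack_of_textbook", "limited_time", "lack_of_textbook",
    "dated_equipment", "training_too_theoretical", "limited_time",
    "lack_of_textbook", "dated_equipment",
]

code_freq = Counter(coded_segments)

# Codes mentioned often enough become candidate survey items.
THRESHOLD = 2
candidate_items = [
    f"To what extent is '{code.replace('_', ' ')}' a barrier in your teaching?"
    for code, n in code_freq.most_common() if n >= THRESHOLD
]
for item in candidate_items:
    print(item)
```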
In the second phase of the study, quantitative data were collected. The questionnaire developed from the analysis of the qualitative data was self-administered and delivered by means of an online survey. Despite the researchers' intention to collect census data, the final number of teachers who participated in the survey and filled in the questionnaire was 157 computer science teachers. The survey participants represented 12.5% of the total population (N = 1257) of primary CS teachers in the country. The questionnaire contained 40 main questions, which were divided into six thematic areas: background and demographics, content of instruction, methods and forms of instruction, educational materials, barriers and challenges, and perceptions of the ICT curriculum. Quantitative data were analysed using descriptive statistics and were in turn merged with the original qualitative data in order to be summarized and interpreted.

The participants' profile

The total number of computer science teachers taking part in both phases of the research was 190 (33 in the qualitative and 157 in the quantitative phase). The great majority were aged between 30 and 49. The average age of the respondents was 42 and only one participant was younger than 30. A high percentage of teachers had been teaching in primary education for less than 9 years (88%), yet they had considerable teaching experience in secondary schools: 91.5% had been serving in various secondary education posts before transferring to primary education. As regards teachers' educational background, 88.5% held a CS or IT qualification (Table 1). In general terms, despite the non-representativeness of the sample, its distribution appeared to mirror the synthesis and the characteristics of the total population in terms of gender, graduate studies and geographical region of service.

Teachers' conceptions of "ICT Literacy"

One of the main issues raised in the discussions with CS teachers concerned their personal and individual conceptualizations of the term "ICT Literacy". Despite the interviewer's attempts to guide the conversation in this direction, almost all participants avoided explicit references to the defining characteristics of what it means to be literate in ICT. Instead, teachers preferred to discuss in detail their views on the primary ICT school subject. In particular, teachers' viewpoints could be divided into three central categories of thought. Half of the interviewees supported the idea that the primary school subject of ICT should be changed and be understood as a subject directly related to computational thinking, algorithms, coding, programming and STEM education.

"First of all, I would change the name (of the subject). I would call it computer science. We have discussed this many times with the children. They look at the program of studies and ask 'what is ICT?' … (The subject) should drift away from simple use ... It should be more scientific." (Teacher 12)

A second line of thought was expressed by almost a third of the interviewees, who commented that the ICT subject is concerned with the development of knowledge, abilities and skills related to digital literacy, and that this is what should be expected from primary school children. Programming may be perceived as an interesting activity, yet it cannot constitute the core of the ICT subject, due to children's age and learning readiness.

"In primary school, I wouldn't change the subject matter. I don't believe that somebody can teach rigorously computer science or concepts like networking in primary school … (I'm inclined) more towards the use of technologies, various software applications." (Teacher 18)

"Listen to me, we cannot teach science in primary school children… Digital Literacy. This is what I teach to the children. This is my subject… For me the subject matter is concrete, it's meaningful, and dilemmas of the type teaching Word or teaching Excel or teaching programming are pointless." (Teacher 31)

Finally, a few teachers voiced a combination of rationales by stating that both ICT and computer science are important and should co-exist in primary education. Some suggested that younger students should become familiar with computers and the use of various applications, while older children may focus on programming and computational thinking. Others commented that all students should attend a compulsory ICT skills development course, and the ones who develop a special interest in the subject should be given opportunities to attend programming and CS courses.
"In lower grades (students) should be given incentives so as to use the computer with no fear and use it properly… In the 3rd and 4th grades basic concepts like user interface, photo editing, video editing, multimedia applications (should be taught). In higher grades, web 2.0 applications seem to me important as well as programming, algorithms and robotics." (Teacher 2) "There should be a division between computer use/ICT and Informatics as a science.That is what is needed in school, especially primary school.Both should be taught simultaneously."(Teacher 30) The survey data collected appear to confirm the existence of these three viewpoints about the primary ICT subject (Graph 1).Even though the percentages of teachers supporting each position differ significantly from the ones reported in interviewing data, it is still notable that teachers seem to be divided between an ICT versus Computer Science perspective. Graph 1: Teachers' viewpoints about the focus of the ICT subject Nevertheless, the key issues that need to be raised are concerned with teachers' actual perceptions of ICT literacy.Participants seemed unaware of the term or unfamiliar with its dimensions in the corresponding literature.Their main frame of reference to "ICT Literacy" appears to originate from a loose perception of the primary ICT subject.Within this line of reasoning when they were asked about their thoughts towards the concept of ICT literacy, their responses focused on the contents of the primary ICT subject.Furthermore, evident in the narratives offered is the tendency to trivialize and somehow downgrade the wide-ranging competencies entailed in ICT literacy by equating them with the acquisition of simplistic skills in operating computers and generic applications.On the other hand, teachers who supported the development of digital skills kept on emphasizing that this was appropriate in the context of primary education alone.In summary, teachers' views of the subject and in turn their indirect understandings of ICT literacy were not in alignment with either the contents and the scope of the official ICT curriculum or the related literature.In fact, they illustrate a number of misconceptions about both the curriculum subject and the meaning of being literate in ICT and these misconceptions seem to endorse an unnecessary and unproductive divisiveness among the multifaceted options and dimensions of both ICT literacy and computer science.Which should be the focus of the ICT primary subject? 
Teachers' views of the ICT Curriculum

A part of the discussions undertaken with CS teachers was focused on their judgements of the contents and the methods described in the official ICT curriculum. The vast majority of the teachers interviewed reported that they espoused the contents of the ICT curriculum to a high degree, stating that they implement around 70% of the subject matter described in it. Questionnaire data also illustrated that the total average degree of compliance with the contents of the ICT curriculum was quite high, yet responses were also variable, with 28 participants declaring that they implement 50% or less of the curriculum and another 50 teachers reporting a medium level of adoption (Table 2). A minority of teachers expressed feelings of rejection and refusal towards the curriculum, while some others considered it quite adaptable and assumed that it was not obligatory to follow it. However, the great majority commented that even though they take into account the curriculum recommendations, in practice they need to adapt its contents to their students' needs, to the particular conditions of the school they are working in, and to their own professional needs and perspectives.

"It (the curriculum) didn't impress me. I do not remember it, that is how interesting I found it. To tell you the truth, the program of studies is not followed. Within the way that I teach, I'm not interested in the curriculum." (Teacher 18)

"The primary computer science teacher is free to launch his or her own program of studies. There is great freedom, because the ministry proposes, it does not impose, whatever exists in it (the curriculum). So, you are autonomous, you can create your own program of studies, and I think that more or less everyone implements his or her own curriculum." (Teacher 9)

"Surely, the program of studies is not strict. It depends on the school conditions. When I first started at this school, I found lots of weaknesses. Subject matter that was to be taught in 3rd or 4th grade, I started teaching in 5th and 6th grade… there isn't a strict curriculum like the one we used to have in lower or upper secondary education." (Teacher 14)

"In general, the ministry's mandates are great in theory, but I had to adapt them to my program of teaching." (Teacher 7)

The medium degree of compliance with the ICT curriculum and teachers' reported practice of developing their own study programs may be directly related to their beliefs and opinions about the curriculum. When they were asked to determine its quality and judge its value through the use of adjectives, many participants initially commented positively on its attributes and characterized it as "comprehensive", "suitable", "appropriate", "properly oriented" and "well-thought-out for primary school ages". However, this preliminary favorable judgement was accompanied by second thoughts raising concerns. The objections expressed pertained to the adequacy and applicability of the subject matter. According to teachers, some units, such as the ones related to concept mapping, the defining characteristics of technological terms, or the use of spreadsheets, could not be delivered, either due to younger students' incapacity to understand or because of students' lack of interest. Thus, the curriculum needs to be updated and modernized. Furthermore, learner-centered forms of teaching cannot be applied due to the lack of adequate instruction time, while limited infrastructure and outdated equipment represent additional obstacles. Moreover, one of the most
frequently reported barriers concerned the organization of subject matter in the curriculum and the lack of teaching guidance.

"How can you explain to children what a concept map is? I've tried (to explain) once. I did not attempt it for a second time." (Teacher 3)

"Children cannot manage email accounts when they are young." (Teacher 5)

"It (the curriculum) should be fleshed out with computational thinking, computer science, algorithms and programming in higher grades." (Teacher 29)

"Teaching time is not enough. The subject matter is too extensive for the time allocated to the subject. The organization of time and subject matter is completely wrong. How can you implement projects with one hour per week?" (Teacher 22)

"If there isn't an appropriate computer lab, the subject matter cannot be delivered." (Teacher 6)

"The program of study is confusing. The ministry developed a program without considering students' background, logistics and technical support. All these are upon us." (Teacher 14)

"What bothers me is that the same content is taught in different grades. What is the meaning of this practice? The creation of a website is repeated in 3rd, 4th, 5th and 6th grade. I want this to become more specific, to know what exactly to teach per grade." (Teacher 24)

"Every teacher needs to create his own activities, worksheets and lesson plans. All of these belong to me. What was the support that the ministry provided to me? The teaching objectives alone. The ways and the know-how of doing all these were left to me." (Teacher 33)

Evident in the comments above is teachers' difficulty in identifying differences in the formulation of learning objectives, in understanding the spiral design of the curriculum (implementing the same contents with deepening complexity or in different situations and conditions), and in organizing subject matter into short-term and long-term action plans to ensure balance, continuity and progression in its delivery. Due to the lack of professional-pedagogical competencies in instructional design, many teachers perceived the curriculum as repetitive, chaotic and impracticable and commented that it should be accompanied by a uniform school textbook, which would help in clarifying what exactly needs to be taught, at which grade, and in what ways. Survey data appear to confirm this request, as 86 participants (54.8%) reported that the guidelines provided for the organization of subject matter per grade are obscure, and 122 teachers (77.7%) declared that the development and use of an official textbook is necessary. Moreover, the quantitative data collected affirm the idea that teachers judged the quality of the curriculum as average in terms of applicability (mean = 1.8 out of 3), sufficiency (mean = 2.0), suitability (mean = 2.3) and general satisfaction (mean = 2.1) (Graph 2).

Graph 2: Teachers' judgements of the ICT curriculum

In conclusion, 133 CS teachers (84.7%) stated that the official ICT curriculum needs to be modified, for it cannot be considered satisfactory in its current form.
Classroom implementation practices of the ICT curriculum

Teachers' tendency to equate ICT literacy with the acquisition of lower-order technical and operational skills, similar to the ones developed in ECDL-like courses, was reflected in their self-reported teaching practices and their descriptions of the subject matter taught per grade. In the qualitative data collected, teachers' references to the content of instruction were focused on specific software applications. As such, when questioned about what they teach in each grade, the vast majority responded that 6- and 7-year-old children are familiarized with the basic use of a computer system, learn about ergonomics and are taught how to use painting software. In the 3rd and 4th grades, they teach word processing and a bit of concept mapping, while 10- and 11-year-olds are taught how to create a PowerPoint presentation, how to operate a spreadsheet in Excel, how to manage an email account, how to use browsers and search engines, as well as how to use multimedia editing software. Little emphasis was placed on the development of technological knowledge, and most teachers reported that technological concepts and terminology cannot be properly taught and understood by children younger than 8 or 9.

"In higher grades, 5th and 6th, I use the middle school textbook to explain the internal parts of a computer and clarify in mechanical terms what a computer is." (Teacher 27)

"Familiarization with the keyboard, to learn how keys are arranged … the objective is to feel at ease with the mouse, to use the keyboard to type in Greek and English, punctuation, intonation." (Teacher 11)

"With painting software, we work on specific exercises. We may give them a pattern and ask them to continue it… we may paint pictures about the seasons or our city." (Teacher 16)

"I teach them how to make a table, put pictures or information in cells, titles, to edit the table… we created, for example, a meal plan for the week or our school schedule." (Teacher 21)

Clearly, the responses with respect to curriculum coverage indicate the adoption of a tool-based or application-based perspective towards the subject. However, this kind of perspective is not in alignment with the recommendations of the ICT curriculum, which recognizes the necessity of technical and operational skills, yet identifies literacy in ICT as a mixture of cross-curricular cognitive processes and higher-order abilities, and proposes a problem-solving approach and an interdisciplinary project-based strategy towards their instruction.

On the other hand, approaches to programming appeared to be more playful. Almost all CS teachers interviewed and 80% of the survey participants reported that they teach programming skills using visual programming environments, such as EasyLogo, Kodu and Scratch. Children's familiarization with programming may begin as early as 6 or 7 years of age, but the vast majority of teachers seem to start teaching programming from the 4th grade on. According to participants' accounts, the development of programming skills focuses on the analysis of solutions to a logic problem using a blend of three programming structures, namely the sequence structure, the loop structure and the selection structure. Many report that programming instruction is based on students' wish to learn how to make games.
"We do a lot of Scratch in 6th grade… how the environment looks, that is divided in 3 spaces, to understand what we can do in each space, what are the instructions.It took me 1-2 hours.Then I them simple things, such as how to draw a rectangle, how the cat moves and turns, some small moves."(Teacher 11) "Children want to make games…so through this we learn how to use events, instructions and the rest.I always try to use students' wish to learn how to construct games." (Teacher 8) "In the 1st and 2nd grades I talk to them about computer science concepts, but in a friendly way. I use the robots in bitsbox.com, so once a month we do this. I bring in class some activities and we work initially with symbols and later with numbers, because the children do not know much about writing." (Teacher 27) A few teachers indicated that they had experimented with the use of educational robotics, but as they commented, even though activities with robotics kits are extremely interesting, they require excessive time, plenty of resources and expensive specialized equipment that does not exist in schools.Lack of appropriate teaching time, lack of adequate resources and equipment and increased number of students have been reported as major obstacles affecting the efficiency and effectiveness of teaching the subject.As the CS teachers reported, it is nearly impossible to implement project-based approaches within the limit of an hour per week (80% of survey participants).Consequently, their teaching approaches are quite traditional, following a simplified version of direct instruction.Within the frame of a deductive approach which fosters the quick development of psychomotor skills, most lessons begin with a demonstration, or an exemplary execution accompanied by an explanatory narration and continue with task assignment that is related to the introductory demonstration and entails the completion of several lab exercises. "I usually have a short worksheet, I explain the objectives, what we need to do, what is the purpose of our assignment. I may start with a short presentation and then I let them work on their own." (Teacher 17) "I demonstrate it on the projector and then the class must do an exercise on the skill that I showed them."(Teacher 3) According to survey data, this kind of lesson structure is followed by 90% of teachers.The implementation of other formats of teaching, such as questioning and forms of discussion, is reported by an equally significant number of teachers and their utilization seems to run across all phases of instruction.Video presentations as well as forms of dramatization and role playing seem to be used when the subject matter taught is related to understanding technological terms, internet safety and concepts or programming instructions. 
"Internet safety…we will do some role-playing, we will assign roles to students, one is the victim the other is the perpetrator.The lesson becomes more experiential, rather than theoretical."(Teacher 32) "In the beginning, I showed them on the projector, I explained the instructions, the algorithms and then I pretended to be a robot and I asked them to give me orders so as to make me move to the door of the classroom."(Teacher 20) With respect to project implementation, many of the teachers interviewed referred to them as one of the most valuable and meaningful approaches to teaching at the primary level, which they cannot utilize as much as they would like due to teaching time limitations and lack of resources.However, when they were asked to describe examples of the projects, they have implemented they referred to forms of classroom work that are directed by the teacher, may be thematically related to one or more school subjects, last much longer than exercises and activities and give students the opportunity to practice skills they have acquired in previous courses.Most -if not all -of the project examples offered in the discussions with teachers entailed searching, organizing and presenting information about a theme (eg.recycling, Olympian gods, touristic attractions of a city, climate change) or collecting data, analyzing them and presenting findings (e.g.collecting data about injuries at school, gender differences, bullying). "In groups of 2-3 they chose a theme of their preference, like the 10 biggest cities of the world, the most beautiful islands in Greece, a theme about ecology or recycling…and they presented their work for other children to see…" (Teacher 16) "In the 5th and 6th grades, I teach Excel and by the end of the year, we conducted research on issues related to schooling.We administered questionnaires then the children collected them and produced graphs and tables.We also used Word to write about the project and PowerPoint to present it."(Teacher 34) Projects were mainly understood as a long-term activity that gave the opportunity to practice skills already taught through direct instruction.Their relation to other school subjects was largely superficial and did not provide the context for meaningful use of ICT tools and applications.In other words, they were not related to problem-solving forms of project-based learning, as they did not require the application of a technology solution to a real-world problem, or a realistic situation and they were heavily guided and directed by the teacher. As regards students' assessment CS teachers expressed worrying views related to the status of their subject in primary education.Most of them admitted that the context of primary education is quite different from the one they were used to in secondary schools and reported that student assessment is not systematically implemented (77.7% of survey participants).Their subject is perceived as subordinate, the logic of exams and testing is not acceptable and the appraisal of student performance with grades lower than "excellent" is almost prohibitive and is negatively received and remarked by both parents and students.Parental interest for children's performance in the ICT subject is extremely low. "Anyway, the issue of assessment at primary school is for me an unsuccessful procedure.That's why all get an A. 
It is deteriorating. Especially in the subject of computer science, as in all other subjects taught by subject specialist teachers, parents and children believe that we do not have the right to put a grade lower than A. If they see a B they feel alienated." (Teacher 12)

"A minority of parents come and ask me to describe what I think about their child. This is justified by the fact that I teach them for only an hour per week." (Teacher 15)

On the other hand, CS teachers appeared confused about the scope and the methods of student performance evaluation and expressed contradictory thoughts about assessment. Some of the views reported illustrated a lack of familiarity with the curriculum recommendations, claiming that there are no official guidelines for evaluation or that "every teacher applies his or her own teaching practices and naturally his or her own assessment methods". Furthermore, and in conflict with the official guidelines, the vast majority agreed with the view that grading (summative evaluation) is more significant than evaluation of student progress and growth (formative assessment). More than half of them agreed that student assessment is not important in the case of primary school children. Yet, at the same time, an equivalent percentage of teachers believed that assessment should be performed in a systematic way (Graph 3).

In conclusion, the data collected appear to support the idea that CS teachers have developed their own ways of understanding, conceptualizing, implementing and practicing education in ICT. Their lonesome ride is reinforced by the fact that their collaborations with other subject specialists and primary teachers within the school organization, if and when they exist (44% of teachers surveyed), may be described as weak, trivial and incidental. However, the main reasons are founded on the feelings of powerlessness and abandonment they encountered after their transfer to primary education.

"I have served 13 years in education. No one, not a headteacher, not a primary teacher, not a computer science teacher, not a school advisor has ever come to ask me, "What do you teach in your classroom?" Not even once. That for me is paradoxical. It is a curse. This fact alone illustrates the level of interest that education authorities have in children's education." (Teacher 32)

"There is no recipe, and everyone works based on his or her intuition, unfortunately… I have functioned more with intuition, instinct and experience. Little children are unpredictable and do not express their thinking." (Teacher 10)

"I didn't have a problem with subject knowledge… We read, we struggle, we help each other, we found a way of dealing with the subject, but no one has informed us about what we should do in the classroom, how primary school works, how it operates. That looked like mountains to me." (Teacher 28)

As most teachers admitted in both questionnaires and interviews, the lack of professional development regarding curriculum and instruction, the context of primary education, the management of young children, and practical professional knowledge on child psychology and pedagogy, combined with the indifference they experienced from public administrators and educational authorities, have been and still are major obstacles and barriers to the delivery of the ICT subject.
Discussion of findings -Recommendations The investigation of teachers' beliefs, perceptions and practices towards the primary ICT subject appears to support the idea that there is a misalignment between the policy rationale towards ICT, teachers' understanding of ICT and the practice of ICT in the primary classroom.CS teachers avoided discussions about the concept of ICT literacy per se.From their indirect comments on the ICT school subject, it may be inferred that they are not familiar with the term's conceptual dimensions.In their perceptions, ICT is equated with mechanical familiarization with a set of low-level psychomotor skills, while their practices endorse this perspective by adopting an application-based rationale.Within this context, opinions regarded with the school ICT subject appear to be dichotomized, as half of the teachers endorse a tool-based perspective as suitable for primary children and the remaining half support the idea of a computer science perspective.Teachers' loosely defined ideas about literacy in ICT and computer science have been reported in other studies (Kallivretaki, 2016;Kordaki & Komis, 2000).Some also report the espousal of traditional teaching in the delivery of the subject (Kallivretaki, 2016;Theodorou, 2012a;Tziafetas et al, 2013), difficulties in the adoption of a critical, cross-curricular approach (Tziafetas et al, 2013), and mixed beliefs about student assessment (Fessakis & Karakiza, 2011).Similarly, the teaching approaches described in this study may be characterized as traditional and deductive related to direct instruction.On the other hand, instructional activities are hands-on and laboratory-based, while the development of programming skills is widely appreciated and practiced.Yet, the forms of interdisciplinarity applied are superficial, opinions about assessment are confusing and project-based work is understood as a long-term activity suitable for practicing lowlevel skills instead of supporting a problem-solving rationale.The explanations provided in the literature (Kyritsakas, 2008;Theodorou, 2012a) target on CS teachers' lack of pedagogical competence, which causes them to operate on the basis of their own autobiographical memories and to adopt the teaching style and the norms that they themselves experienced as students.Most perceive themselves not as teachers who deliver a school subject, but as professionals in science who teach it (Kyritsakas, 2008).As a consequence, they appear to construct their own arbitrary versions of meaning for the school curriculum, their own teaching theory and individualistic interpretations of teaching methods and learning approaches (Kallivretaki, 2016;Theodorou, 2012a;Trapsioti, 2009).The findings of this study seem to support these interpretations.On the other hand, the majority of teachers indicated a number of obstacles in curriculum delivery and admitted that children's needs and school contextual constraints force them to adapt and modify it.Many reported partial compliance with the curriculum and declare that due to the conditions experienced in primary education they feel compelled to formulate their own program of studies. 
Admittedly, a part of the misconceptions and misalignments identified in the present study may be related to teachers' lack of professional-pedagogical knowledge. Yet, the role of contextual factors and policy directives, which consistently ignore teachers' practical knowledge, in combination with the absence of appropriate professional development opportunities, needs to be considered in this equation. Research suggests that teachers' practical knowledge may be perceived as a construct formed by three main constituents: (a) teachers' past experiences, (b) teachers' perceptions of the current teaching situation, and (c) teachers' vision of what teaching should be like (Duffee and Aikenhead, 1992; Elbaz, 1983). Our CS teachers' views, beliefs and implementation practices towards the Greek ICT curriculum for primary education may be partly understood and interpreted through the lenses of these three landscapes of thought and experience.

CS teachers' past experiences in education and teaching may be directly associated with their ways of knowing and may be held responsible for the development of values, beliefs and rules of practice (Connelly and Clandinin, 1985). Their exceptional qualifications in Computer Science and Information Technology, in combination with their lengthy experience in secondary schools and the shortage of professional-pedagogical knowledge, may be reflected in their difficulty in conceptualizing literacy in ICT, the endorsement of programming coupled with the need to change the title of the ICT subject, the inability to understand curriculum design and content, and the tendency to implement practices that are well known from secondary schools. The influence of these past-experience projections, in combination with the influence imposed by the expectations of the current teaching situation of primary education, appears to shape teachers' instructional decisions. Requirements related to the official curriculum directions and constraints related to the primary school environment (limited teaching time, lack of appropriate equipment, young children's diverse abilities and needs, and inadequate professional development) lead teachers to adopt an experiential and intuitive teaching profile, to develop their own definitions of both pedagogical and ICT-related terms, and to translate the 'intended' policy guidelines into an array of personally constructed, differentiated practices which may range from simplistic skills acquisition to computer science concepts acquirement. Apparently, computer science perspectives towards the ICT subject are largely related to teachers' educational background and represent their vision of what teaching should be like in the primary subject, which in turn fuels decisions on instructional practice. Yet, for a significant number of other teachers with the same educational background, their vision of the ideal course has been negotiated through the experience gained within the context of teaching primary school children. As a consequence of teaching the subject, teachers' new contextualized experiences interact with older constructs formed by their background knowledge (Duffee and Aikenhead, 1992; Arzi and White, 2008) to create different visions related to the tool- or application-based perspectives which are perceived as more suitable for the primary school context. These visions are necessarily reflected in these teachers' classroom practices and motivate them to experiment with more active instructional approaches, such as forms of project teaching, game-like activities
and formative assessment.

So where does this analysis lead us, and why is it important? Teachers' practical knowledge is a fundamental factor in the improvement of educational practice (Connelly et al., 1997; Ross & Chan, 2016). Teachers are not mere executors of the programs of study, the guidelines and the materials that subject specialists and curriculum designers have developed far from the classroom. They are key stakeholders, who negotiate meaning and act upon the official mandates through their personal, professional and educational frames of reference. Analysing these frames of reference and understanding the process and the ways in which they affect teachers' beliefs, instructional decisions and everyday classroom practices is of immense importance for the improvement of teaching and for the identification of what education students receive. Through this kind of analysis, adequate and more effective professional development opportunities can be designed, teacher recruitment practices may be better informed and enhanced, and educational policy may be planned in ways that involve teachers "by working with them rather than on or against them" (Clandinin, 1985: 383). In this study, for example, the authorities' decision to transfer computer science teachers with no pedagogical competence, and without appropriate preparation and support, from high schools to primary schools to do the job of a highly skilled educator was unfortunate; and unfortunate decisions involving the teacher workforce are bound to cause confusion, misconceptions and, necessarily, misalignments of practice with the official policy guidelines, such as the ones described in this study and others (Barnes & Kennewell, 2017; Larke, 2019; Royal Society, 2017). On the other hand, policy decisions related to curriculum changes may end up in equally devastating outcomes, especially when these decisions are not ingrained in the reality of current classroom practice and do not consider teachers' practical knowledge by involving them in the design and formulation of the intended changes. One may imagine what kinds of new misconceptions and misalignments may be raised by the replacement of the Greek ICT curriculum with the academically oriented, rigorous and significantly more demanding "Computer Science" primary curriculum (Drenoyianni, 2022). A glimpse of the possible outcomes may be traced in studies describing teachers' perceptions of computational thinking, the challenges entailed in the implementation of computing, and the need for developing sound pedagogical and instructional approaches (Fessakis and Prantsoudi, 2019; Denny et al., 2019; Sentance and Csizmadia, 2017; Royal Society, 2017).

Conclusions

In conclusion, the findings of the present study argue for the importance of investigating teachers' practical knowledge through the analysis of their beliefs, perceptions and practices towards digital education. As has been illustrated, teachers' practical knowledge is largely neglected in policy reform discourses, yet it is a critical factor determining the size and the range of the gap between what policymakers design and what is realistically implemented in the classroom environment. Thus, any structural and curricular transformations envisaged at the policy level need to be rooted in data from the classroom field and in teachers' beliefs, understandings and implementation of past and present policies; otherwise, they will carry within them the seeds of their own dismissal.
On the other hand, this study attempted to explore teachers' viewpoints through the adoption of a mixed methods design. Despite its many limitations, which concern its non-representative samples as well as the utilization of non-standardized research tools, it needs to be noted that the mixture of quantitative and qualitative data has advanced the process of understanding and interpreting findings. Bearing in mind that most of the research in this area is performed through survey methods, it is significant to indicate that the richness of the descriptions and the insight gained from qualitative data are necessary for the conceptualization of survey data.

Finally, the study's results support the idea that digital education, in all of its curriculum forms and variations, is an excessively demanding and perplexing endeavor. The qualifications needed on the part of educators seem, among others, to involve subject-specific knowledge requirements, exceptional pedagogical competences, and broad knowledge of integrated curriculum design and transdisciplinary instruction. Beyond educators, considerable investments in resources and infrastructure and the alignment of policy with school contextual constraints are needed; otherwise, it is difficult to see how any of the visions proposed at the policy level may be realized.

Graph 3: Teachers' views about student assessment in ICT
Table 1: Aspects of the participants' profile
Table 2: Teachers' self-reported percentages of ICT Curriculum subject matter implementation
A Smart Security System with Face Recognition

Web-based technology has improved drastically in the past decade. As a result, security technology has become a major help in protecting our daily life. In this paper, we propose a robust security system based on face recognition (SoF). In particular, we develop this system to grant authenticated users access to a home. The classifier is trained by using a new adaptive learning method. The training data are initially collected from social networks. The accuracy of the classifier is incrementally improved as the user starts using the system. A novel method has been introduced to improve the classifier model through human interaction and social media. By using a deep learning framework, TensorFlow, it will be easy to reuse the framework to adapt to many devices and applications.

A. Motivation

Modernization is leading to a remarkable increase in the number of crimes, especially robbery. Law enforcement agencies throughout the US reported an overall increase of 1.7 percent in the number of violent crimes brought to their attention in the first 6 months of 2015, and robbery increased by 1 percent from 311,936 cases in 2014 [1]. Therefore, security systems have a crucial role in safeguarding people. It is necessary to have a robust system which can distinguish between people and respond differently based on their privileges. A number of methods are available for detecting and recognizing faces with various levels of complexity. Face recognition facilitates automation and security. It has already been used in many applications including ID issuance, law enforcement, border control, and many other commercial products. State-of-the-art recognizers using convolutional neural networks (CNN) outperform the human recognition rate; however, these systems are not automatically improving. Another issue with these systems is that they require adequate data to be trained before they are actually deployed. It is essential that the system is robust in recognizing people and that the training can be accomplished without much difficulty. TensorFlow [2], Google Brain's second-generation system, is a deep learning framework released by Google. TensorFlow is flexible, portable, feasible, and completely open source. It also extensively interacts with different hardware such as smartphones and embedded computers [3].

B. Related Work

Home security has been an essential feature of smart homes and has received growing interest in recent years [4] [5]. Various home security systems are on the market from many prominent companies such as ADT [6], Vivint [7], and Protect America [8]. However, none of them have a face-recognition feature in their systems, because of moderate confidence and exhausting computational requirements. Along with the rapid development of smart devices, Netatmo [9] presented a device using a deep neural network to recognize faces, but their system is still far from expectations. This smart camera does not keep up with the competition as a webcam or a security cam. It is slow at everything: the live feed lags, the notifications are delayed, and it takes a while to learn faces. In the research literature, there are several security systems using face recognition technology. A facial recognition system for security access and identification presented by Jeffrey S.
Coffin [10] uses custom VLSI hardware and the Eigenspaces method, and the security system presented by Shankar Kartik also uses the Eigenfaces method for face identification [11]; both give unfavorable results with moderate accuracy. We use a deep learning algorithm for the face recognition problem, which is closing the gap to human-level performance in face verification. Robustness to changes in lighting conditions or expression, or even to partial occlusion of the face, must be considered. Several papers have proposed various techniques for face recognition under those conditions. Eigenfaces [12] extract features that are variant to the above factors. Facenet [13] uses a deep convolutional neural network with the architecture of Google's Inception model [14] and a novel online triplet mining method for training, instead of an intermediate bottleneck layer. On the widely used Labeled Faces in the Wild (LFW) dataset [15], the Facenet system achieved a record accuracy of 99.63%. Unfortunately, however, as the size of the database increases, the computational cost increases and recognition accuracy declines accordingly. That is why incremental learning, a learning approach designed to handle large-scale training data with efficiency and accuracy, is attractive. A brief definition of incremental learning is that learning is a gradual process with new data; the idea is that existing classifiers are extended with the new classes to be learned [16]. A related key idea is to begin learning on low-resolution images and then gradually increase to high-resolution images [17]. We use 96x96-pixel crops as the input data, the size also mentioned in Facenet [13] for the training data. Face detection algorithms have been addressed in many papers, such as Haar Cascades [18] and the Kalman Filter [19], and applied in various fields [20] [21] [22]. Besides, OpenCV is a robust open-source library which supports many methods to detect faces. However, in this paper, we use the Dlib C++ library, which supports machine learning algorithms and uses the histogram-of-oriented-gradients algorithm [23]. Face detection is necessary not only for the camera node detecting the face but also for pre-processing the input data. In this paper, we also describe a novel method to collect data from social media by using the Facebook API, and to ask for human interaction to label unidentified people, which directly supports incremental learning of the neural network model with new data. The interface is also designed for easy use on many types of devices.

II. AN OVERVIEW OF OUR SYSTEM

This section describes the SoF system's experimental architecture inside a smart home. The SoF system consists of a camera node, a cloud server, a smartphone and smart devices for interacting with users. The SoF system is shown in Figure 1.

A. Camera Node

The camera node uses a Raspberry Pi, a tiny and affordable computer, which is typically placed near the entrance where access has to be granted. Whenever a person needs access to the house, the camera node captures a photo and processes it further. The camera nodes are positioned such that they have a wide range of vicinity over the subject and can detect a face approaching the camera from a distance. The camera node first detects the human face, and then it runs all image processing offline, locally on the Raspberry Pi, using the Dlib library and TensorFlow installed on the Raspberry Pi.
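As a rough illustration of the detection step on the camera node, a minimal sketch using Dlib's HOG-based detector might look as follows (the snapshot file name and the decision to upsample once are our assumptions for illustration; the actual node code is not shown in this paper):

```python
import dlib
import cv2

# HOG-based frontal face detector from the Dlib library [23]
detector = dlib.get_frontal_face_detector()

# Hypothetical snapshot captured by the Pi camera at the entrance
frame = cv2.imread("entrance_snapshot.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Upsampling once (second argument) helps find smaller, more distant
# faces at the cost of extra computation on the Raspberry Pi
for rect in detector(gray, 1):
    # Clamp to image bounds; Dlib rectangles can extend past the frame
    top, bottom = max(rect.top(), 0), min(rect.bottom(), frame.shape[0])
    left, right = max(rect.left(), 0), min(rect.right(), frame.shape[1])
    face_crop = frame[top:bottom, left:right]
    # face_crop is then aligned and passed to the recognition pipeline
```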
Because the Raspberry Pi is a tiny embedded computer with limited power and training a neural network requires expensive computation, the training task is done in the cloud node. In addition, the camera node can adaptively be a smartphone, a sensor camera node or an assistant robot, because TensorFlow is able to run on many operating systems.

B. Cloud Server

Cloud computing has advanced drastically in recent years. Cloud computing is a type of computing that relies on sharing computing resources rather than having local servers or personal devices handle applications. Cloud computing provides a simple way to access servers, storage, databases and a broad set of application services over the internet. A face recognition application using a CNN requires a machine with a lot of computational power, typically with general-purpose GPUs. Cloud computing offers a reliable solution at low cost for such applications. Following the architecture, the cloud server node receives data from the camera nodes, saves it, and then trains on it after a collection period. It also interacts with the owner/administrator through a smart device. The server has a database with a record of all users. The server communicates with the sensor nodes and the smart device using WebSockets, enabling real-time data processing. The cloud server also hosts a web-based server collecting data from Facebook and saving all data to storage. Based on the CoSHE cloud infrastructure [24], [25] and the cloud testbed for research and education [26], we built the cloud services for the SoF system. The cloud server architecture is shown in Figure 2.

C. Smart Devices

The cloud server can communicate with smartphones and other smart devices such as smart thermostats, lights, and sensors. These smart devices are controlled by the cloud server. The smartphone allows the owner to control the smart devices and also to change the permission level for different users. Based on the granted access level, different users are able to control different smart devices. We demonstrate the capabilities of the system using a miniature smart home, as shown in Figure 3 [27]. Whenever a new person is detected, the cloud server sends an alert to the smartphone. The owner can then label the person's name or take necessary actions in case of any security breach. Also, as smart devices have grown dramatically in recent years, it is much more convenient to control them through the IoT (Internet of Things).

D. Social Network

This is a new approach to collecting data. Social networks are the largest source of free, diversified, adaptive data online. By taking advantage of Facebook's Graph API, we can easily obtain a face with a tag name. Then, we can download all pictures with users' faces to the cloud node. Most importantly, a Facebook developer app is simple and convenient to share between users: they only need to log in with their account and allow the app to collect pictures. The social network node has three collecting interfaces: a public application on the Facebook developer website, a web-based interface hosted on the cloud server, and an application on Android devices. These make it easy to collect labeled face images of users who are to be given access. We also note that the data will be used for research purposes and that users' sensitive data are protected.
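To make the Graph API collection step concrete, here is a minimal sketch of how tagged photos might be pulled once a user has logged in and granted the app access. The endpoint, field names and API version follow our reading of the Graph API documentation of that period and may differ across versions, so treat this as illustrative rather than as the exact code used by the SoF system:

```python
import requests

GRAPH = "https://graph.facebook.com/v2.12"  # illustrative API version
token = "USER_ACCESS_TOKEN"                 # granted when the user accepts the app

# Request photos the logged-in user is tagged in, together with the
# tag names/positions and the image renditions to download
resp = requests.get(
    f"{GRAPH}/me/photos",
    params={
        "type": "tagged",
        "fields": "images,tags{name,x,y}",
        "access_token": token,
    },
).json()

for photo in resp.get("data", []):
    url = photo["images"][0]["source"]  # largest rendition listed first
    names = [t["name"] for t in photo.get("tags", {}).get("data", [])]
    image_bytes = requests.get(url).content
    # image_bytes and its tag names would then be stored in cloud storage
    # as labeled training data for the recognition model
```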
A. Introduction

The human brain makes vision seem very easy. It takes no difficulty to tell apart a cheetah and a tiger, read a sign or recognize a human face. But these are really difficult problems to solve with a computer. They only seem easy because human brains are fabulously good at understanding images. In recent years, machine learning has made marvelous progress in solving these difficult problems. In particular, the model called a deep convolutional neural network can achieve reasonable performance on difficult visual recognition tasks, matching or exceeding human performance in some domains. Researchers have demonstrated reliable methods in computer vision by validating their work on ImageNet [28], an academic benchmark for computer vision. Subsequent models improve each time to achieve a new state-of-the-art result: QuocNet [29], AlexNet [30], Inception (GoogLeNet), BN-Inception-v2 and Inception-v3 [31]. Inception-v3 is the latest trained model for the ImageNet Large Visual Recognition Challenge from Google. We implemented the face recognition module based on the method presented in Facenet [13] and trained the Inception-v3 model in TensorFlow [2]. Instead of using the Inception (GoogLeNet) model architecture, we use the Inception-v3 architecture to train a new model with improved accuracy.

B. Architecture

Our work is based on the architecture and published model from OpenFace [32] and the Inception-v3 model [2]. We trained a new model with a new database set. The face recognition architecture is shown in Figure 4. The input data is aligned by using a face detection method and then goes through the deep convolutional neural network to extract an embedding feature. We can use the feature for similarity detection and classification.

C. Detection and Affine Transformation

When processing an image, face detection (Dlib library) [23] first finds a square around each face. Each face is then passed separately into the neural network, which expects a fixed-sized input, currently 96x96 pixels as mentioned in Facenet [13], which is the best size giving the highest accuracy and a low training time. We reshape the face in the square to 96x96 pixels. A potential issue with this is that faces could be looking in different directions, so we have to rotate the images. We use the face alignment method described in OpenFace [32] by first finding the locations of the eyes and nose with Dlib's landmark detector; faces that are undetected or cannot be aligned are eliminated before going to the neural network. Finally, an affine transformation is performed on the cropped image to make the eyes and nose appear at about the same place, as in Figure 5.
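The alignment step described above can be sketched as follows. The landmark indices for the outer eye corners and nose tip follow Dlib's 68-point scheme, as in OpenFace [32]; the template coordinates below are approximate values we chose for illustration, not the exact OpenFace template:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-point landmark model distributed with Dlib
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

SIZE = 96  # fixed network input size, as in Facenet [13]
# Where the outer eye corners and nose tip should land in the output,
# expressed as fractions of the 96x96 crop (illustrative values)
TEMPLATE = np.float32([[0.2, 0.2], [0.8, 0.2], [0.5, 0.55]]) * SIZE
OUTER_EYES_AND_NOSE = [36, 45, 33]  # indices in the 68-landmark scheme

def align(img, rect):
    """Warp the face in `rect` so eyes and nose hit the template points."""
    landmarks = predictor(img, rect)
    src = np.float32([[landmarks.part(i).x, landmarks.part(i).y]
                      for i in OUTER_EYES_AND_NOSE])
    # Affine transform mapping the three detected points onto the template
    M = cv2.getAffineTransform(src, TEMPLATE)
    return cv2.warpAffine(img, M, (SIZE, SIZE))
```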
D. Initial Training Using Data from the Social Network

Our model was trained on 2622 celebrities from the VGG-Face dataset [33], 402 people from Facebook and 108 students using the security system. The process of collecting data on users and training the data is somewhat cumbersome. In order to make the training process easier, we obtain the data from different social network accounts. Images of new users are obtained from social media once the owner requests access for a particular user using the smartphone, and from new users after they use the security system.

E. Incremental Learning

Often the training data obtained from the social network are insufficient for the deep learning model to perform accurately. Once the user is trained with a minimal dataset from social media, the representation of the user is further improved by fine-tuning the model as the user starts using the system. Sometimes the face recognition system fails to classify a person properly and has very low confidence; in such a case, the system asks for help from the owner. The owner is sent a request to label the person through his smartphone. After the owner has labeled the person, the system automatically updates with the new data and sends the result back to the camera node to give access. The interface is also built as a website and an Android app, which are friendly for labeling and collecting data. We use the triplet loss method mentioned in Facenet [13] for incremental learning.
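For reference, the triplet loss over L2-normalized embeddings can be written in a few lines of TensorFlow. This is a generic sketch of the loss described in Facenet [13] (with its margin of 0.2), not the exact training code of the SoF system, and it omits the online triplet mining step:

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Pull the anchor toward the positive (same identity) and push it
    away from the negative (different identity) by at least margin alpha."""
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + alpha, 0.0))
```

During incremental learning, triplets are formed from the newly labeled images (anchor/positive) and embeddings of other identities (negative), and the network is fine-tuned on this loss rather than retrained from scratch.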
A. System working

The system has two processing nodes. The first one is the camera node, as in Figure 6. Using a Raspberry Pi with a Pi camera, the Raspberry Pi detects and recognizes the human face with the current model and data stored in memory. Giving access or sending data to the server depends on whether the system is able to detect and recognize the face with the configured confidence. If the confidence is low or the system is unable to recognize the face, the Raspberry Pi takes a series of the user's photos with different angles and expressions to store in the cloud for training purposes. After the training task is done in the cloud, the updated version of the new model is downloaded to the Raspberry Pi. The second process, shown in Figure 7, runs in the cloud node. The cloud node is aimed at storing the face data and sending alerts to the owner asking for labeling of unidentified persons. In addition, the Facebook web-based application is built on the cloud server, which collects the data from social media. The cloud server performs the computationally exhausting training tasks. By using distributed TensorFlow, we trained the model on multiple computing nodes to speed up the training time, and we also used the incremental learning technique in Facenet [13] to retrain the model with new data collected from social networks and the security system after a specified period of use.

Fig. 7: Cloud node processes.

B. Collecting data from Facebook

The biggest problem of deep neural networks is data. As we mentioned in section III, the VGG dataset has only around 2.6 million images with around 2.6 thousand identities. In comparison, the Google datasets mentioned in the Facenet paper [13] use hundreds of millions of images from Google and YouTube. For research or business purposes, one has to pay for a robust face dataset, while manually collecting the data takes a while and the data are often insufficient. However, as social media becomes more popular around the world, we propose a novel method to automatically collect data from social media. In this paper, we only discuss Facebook, since it is the largest social network today, but this method can also be used with other social media networks. By using the Graph API for developers, we can extract the tagged faces of users who grant login access. We also built and published an application on Facebook, which is very convenient for users around the world, who can log in and share their face images. The working flow of the Facebook Graph API is shown in Figure 8.

A. Face recognition performance

First, we tested the SoF model, trained on the VGG dataset [33], on the Labeled Faces in the Wild (LFW) dataset [15]; the classification accuracy is 0.9318 ± 0.0140. The ROC of the SoF model is shown in Figure 9, compared with the Human and Eigenfaces experiments. Unfortunately, the model is unable to reach the accuracy mentioned in the Facenet paper [13], since we use less input data compared with billions of photos from Google, and we also use a different method to preprocess the input data. However, the accuracy is clearly impressive compared with the Eigenfaces algorithm used in Jeffrey's [10] and Shankar's [11] security systems. The state-of-the-art Inception-v3 model gives a marvelous result which closes the human gap. We also tested face recognition in a real environment with 20 people, shown in Figure 10. The highest accuracy is 92.2%. These people are presented as new guests coming to the house and asking for a label from the owner. The dataset we used to train the neural network model is American, but 60% of the test data is Asian and Latino. The result was lower accuracy compared with the LFW dataset, and the system sometimes failed to distinguish people. That makes collecting data from social networks advantageous, because it is able to collect varied, diverse people from around the world. The SoF system was also confused between two people with similar faces, but more face images with different angles and expressions will solve the problem. The lighting condition is also important: the background should not be too illuminated. The bottleneck values before training to distinguish different faces are shown in Figure 11. After collecting the data from 108 students and 402 users of Facebook, we trained a new model with the updated data using incremental learning. The result is shown in Figure 12, with the AUC improving from 0.974 to 0.985. Also, the rate of correctly recognized Asian faces increased, because we updated the data with diverse images of people from different regions. We would not expect the accuracy to increase dramatically, because the data we collected are insufficient and we are at the limit of the algorithm. However, by using that incremental learning method, we will approach the accuracy mentioned in the Facenet paper [13]. We have already improved the accuracy by using the Inception-v3 [31] model. If we focus more on pre-processing the input data by aligning the data and use the TF-Slim library, a lightweight package for defining, training and evaluating models, we can improve the performance even more.

B. Security system setup

The entire system was developed and tested in a miniature smart home mimicking an actual smart home [27]. The Raspberry Pi is installed at the front door. It always runs face detection, then face recognition, locally. The miniature smart home with the Raspberry Pi is shown in Figure 13. The cloud server is a stack server, as described in section II. The server includes 3 nodes: 1 controller node on the top and 2 compute nodes at the bottom. We actually run several systems in the cloud for different projects, which makes it easy to share data and information. Our cloud server includes Ubuntu, CirrOS, and CentOS operating environments, and instances hosted by the QEMU hypervisor [34] on the compute nodes. The cloud server supports a maximum of 12 VCPUs (time slots of the processor) with 16 GB RAM and a 100 GB root disk. With high computational power, the cloud server is suitable for neural network training and testing. The stack cloud is shown in Figure 14.

C. Interface collecting data from social media and owner

We developed an Android app to alert the owner/administrator via a smartphone, shown in Figure 15. It is also available as a web-based server in the cloud, which is convenient to access anytime, anywhere, from any device. The left side is the interface for the owner: whenever someone tries to access the house, the new data are updated in the app and on the website as well.
B. Security system setup
The entire system was developed and tested in a miniature smart home mimicking an actual smart home [27]. A Raspberry Pi is mounted at the front door and continuously runs face detection and then face recognition locally. The miniature smart home with the Raspberry Pi is shown in Figure 13. The cloud server is a stack server, described in Section II. The server includes three nodes: one controller node on top and two compute nodes at the bottom. We run several systems for different projects in the cloud, which makes it easy to share data and information. Our cloud server hosts Ubuntu, CirrOS and CentOS environments, with instances hosted by the QEMU hypervisor [34] on the compute nodes. The cloud server provides up to 12 vCPUs (processor time slots) with 16 GB RAM and a 100 GB root disk. With this computational power, the cloud server is suitable for neural network training and testing. The stack cloud is shown in Figure 14.

C. Interface for collecting data from social media and the owner
We developed an Android app to alert the owner/administrator via smartphone, shown in Figure 15. It is also available as a web-based service on the cloud server, convenient to access anytime, anywhere, from any device. The left side is the owner's interface: whenever someone tries to access the house, the new data is updated in the app and on the website as well. The owner receives a notification carrying either a recognized name or an unknown face. The owner can then label new users, and the system automatically retrains the classifier model with the new users and grants them access. On the right side, the Facebook Graph API is built into the app with the necessary information from Facebook's database. We collect the tagged faces with their face locations and save them to cloud storage. We also assign different permission levels to different users, which protects privacy and restricts unsupervised children, as shown in Figure 16. For example, guests cannot open the bedroom door, and children cannot control dangerous electric devices or the television.

VI. DISCUSSION AND FUTURE WORK
In this paper, we introduced a new method of obtaining training data for a security system from social media and human interaction. Our system has several advantages. First, TensorFlow is adaptive, powerful and offers good visualization; its training time is acceptable compared with other frameworks, and faster still with distributed TensorFlow (the comparison is shown in the corresponding table). There are many interesting projects leading artificial intelligence and deep learning that are developed in TensorFlow, with strong support from Google. In addition, computing in parallel mode dramatically reduces training time. By using the method from the FaceNet paper [13], we reached a robust accuracy compared with other algorithms, as shown in Table 2. More importantly, the accuracy keeps improving as long as the system is used, with new data from social media and human interaction. Second, collecting data from social media is a beneficial move, since social media is the largest source of public data; Facebook alone has around 1.7 billion active users. With a published Facebook application, we can easily collect the necessary data, and we can also collect data from other social networks such as Instagram and Weibo. One interesting direction for future work is to collect data from the owner's smartphone, such as captured images and videos, and train the network automatically. Another direction is to detect fake faces using gait speed and eye tracking.
2018-12-03T07:57:14.000Z
2018-12-03T00:00:00.000
{ "year": 2018, "sha1": "917b09eeac8554450402972cc8f66cccbbe6d7d2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "917b09eeac8554450402972cc8f66cccbbe6d7d2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
225965171
pes2o/s2orc
v3-fos-license
An Investigation of Groundwater Contamination around Nsukka Municipality Dumpsite using the Resistivity Method

An investigation using the electrical resistivity method was conducted around a solid waste dumpsite at Nsukka, in Nsukka L.G.A. of Enugu State, Nigeria, to investigate the level of groundwater contamination. Vertical Electrical Sounding (VES) and 2-D resistivity imaging were used with a digital-readout resistivity meter (ABEM SAS 1000) to acquire data in the area. A total of eight (8) soundings and six (6) 2D resistivity imaging lines were carried out in the area. A contaminant leachate plume was delineated in the 2D resistivity sections as low-resistivity zones, while the VES shows the depth of the aquifer. In the 2D pseudosections, bluish colours with low resistivities (less than 20.80 Ωm) at depths ranging from 1.28 m to 17.1 m on Lines 1 and 2 are interpreted as contaminated zones. The remaining lines are not contaminated, given their high resistivities (greater than 20.80 Ωm). The electrical resistivity survey also revealed 4-5-layer geoelectric sections and AA- and AK-type sounding curves. The VES results show that VES 1A, 1B, 2A and 2B, which were carried out on Lines 1 and 2 of the Wenner lines, show signs of contamination, with low resistivity values of less than 20.80 Ωm, complementing the Wenner results. The contamination has not yet reached the aquifer on these lines, since the depth to the aquifer ranges from 30.26 m to 155.43 m while the maximum depth of contamination is 17.1 m. It is therefore believed that the leachate has not percolated down to the aquiferous zones, and the aquifers are presumed to be free of contamination.

Introduction
Management of solid waste landfills has been a major problem of urban centers in Nigeria and other developing economies worldwide. In these urban centers, wastes are generated daily and disposed of indiscriminately in rivers and landfills without regard to the environment, the local geology, or their proximity to living quarters. Wastes, which are described as materials that result from an activity or process but have no immediate economic value or demand and must be discarded, have been managed in a manner that has made the government's quest to attain mega-city status a difficult task. In Nsukka, as in most other cities, wastes are generated daily and most are discarded at improperly sited dumping sites that are not engineered. Most of the dumping sites are located within residential areas, markets, farms and roadsides, among others. This threatens groundwater and road facilities, not sparing the aesthetics of the affected areas. Unarguably, the uncontrolled siting of boreholes as the source of potable water in most urban and rural communities, as the government seemingly no longer provides the populace with water, has become a serious challenge. However, maintaining a potable groundwater supply that is free from microbial and chemical contaminants is far from reality in most urban centers, and in Nsukka municipality in particular, due to poor waste disposal and management practices. The challenge is worsened by inadequately trained waste disposal personnel and equipment, poor waste collection, sorting and disposal methods, and the indiscriminate location of disposal sites without regard to the local geology and hydrogeology of the area. All these contribute significantly to the contamination of soil and groundwater.
Recent industrial development and increased urbanization in the municipality have resulted in the generation of enormous quantities of waste, ranging from municipal to industrial. The type of waste generated varies widely, with many human activities located close to dumpsites. During the peak of the rainy season, parts of the dumpsite are covered by flood water, which contributes to the formation of leachate. It is this contaminated liquid that enters the soil and eventually the underlying groundwater at such dumpsites. The manner of disposal shows that solid waste management is one of the greatest challenges facing state and local government environmental protection agencies in Nigeria. Solid waste landfills (SWL) have become a popular waste management system for the disposal of all manner of waste materials in the municipality. Because of the imminent impact of solid waste landfills, it has become necessary to investigate the potential for contamination of soil and groundwater around a municipal solid waste landfill.

Over the past few decades, growing populations have increased the pressure on natural resources, raising demands for water supply, housing and infrastructure. This pressure can be expected to rise, combined with the environmental stress caused by population growth. There is a growing need for detailed geological studies connected to environmental protection and infrastructural development (Dahlin, 2001). In developing countries, the lack of resources is the key issue for waste and landfill management. Disposal of waste into open landfills is a cheap method, and it will continue to be the dominant method of waste disposal for the foreseeable future. The most common way of disposing of waste in Nigeria and other developing countries is open dumping. Dumping sites are usually uncontrolled, creating considerable health, safety and environmental problems (Pugh, 1999). Maintaining and protecting current water supplies and developing new sources of clean water are essential as modern society expands and civilization continues to develop. Moreover, open waste disposal sites often lack reliable geological or artificial barriers, so leaching of pollutants into the groundwater is a concern, particularly for waste dumped in borrow pits, many of which extend below the groundwater table. Details on the contents of a dumpsite may be difficult to acquire but are essential for evaluating the level of risk associated with leaking pollutants. In such a context, the integrated use of geophysical methods provides an important tool for the evaluation and characterization of contaminants generated by urban (domestic and/or industrial) residues (Soupios et al., 2005; Soupios et al., 2006). Among these geophysical methods, electrical methods have been found very suitable for such environmental studies, owing to the conductive nature of most contaminants. The use of electrical methods in environmental studies is well documented (Karlik and Kaya, 2001; Aristedemou and Thomas-Betts, 2000).

Location and Accessibility of Study Area
The study area is located behind the old Ikenga Hotel off the UNN-Ezeimo road, Nsukka, Nsukka Local Government Area of Enugu State, Southeastern Nigeria. The area lies between longitudes 7°21'6.3"E -

Before landfilling, the study area was an excavation site; landfilling started in the second quarter of 2011 with open dumping by the hotel management and the residents, before it became a permanent dumpsite for Nsukka municipality.
Fig. 1 shows a part of the dumpsite and its constituents.

Materials and Methods
The basic equipment used for this geophysical survey is the ABEM SAS 1000 resistivity meter. The resistivity meter is equipped with a 12-volt battery, two current transmission cables on reels, two potential cables, four metal electrodes and a salt solution. Auxiliary equipment for the survey consisted of a Global Positioning System (GPS) to determine the survey locations and topography, geologic hammers for driving electrodes into the ground, two measuring tapes, and cutlasses for clearing traverses. The study used the electrical resistivity method, adopting Vertical Electrical Sounding (VES) in combination with 2D resistivity imaging. The sounding was used to characterize the various lithologic units and to determine the depth to the water table, while the resistivity imaging was used to substantiate the sounding results and to determine the presence of leachate contaminants and the direction of groundwater flow.

Results and Discussions
A total of eight (8) Vertical Electrical Soundings (VES) using the Schlumberger array and six (6) 2D resistivity imaging lines using the Wenner array were conducted around the dumpsite. The VES field data were processed using the Schlumberger automatic INTERPEX analysis software, which generates model curves from initial layer parameters, while the 2D resistivity field data were processed using the RES2DINV inversion software, which subdivides the subsurface into blocks and uses a least-squares inversion to determine the resistivity of each block.

Qualitative Interpretation for VES 1
Sounding curve analysis aims to obtain the equivalent subsurface layering from the apparent resistivity curve. The qualitative interpretation of the profile and depth sounding curves was carried out based on distinctive geoelectric parameters and the number of layers, represented by the four types of auxiliary curves (A, H, K and Q). The VES 1 curve type is AA and it has four geoelectric layers. The summary of the qualitative interpretation of the VES curves is shown in Table 1.

Quantitative Interpretation
The stations were represented and interpreted as VES 1A to 6A, as shown in Table 4.2. VES 1A and 1B were carried out at the 25 m and 75 m marks, respectively, on Profile 1; the same applies to 2A and 2B, while 3A to 6A were carried out at the 45 m mark of their respective profiles. The electrical resistivity images of the subsurface obtained in the study area are presented in Figs. 6-11. The interpreted 2D electrical resistivity data are presented in a colour-coded format consisting of the inverted 2D resistivity structure. On each section, the horizontal scale is the lateral distance and the vertical scale is the depth, both in meters. The resistivity models were obtained by the optimization technique of RES2DINV, which minimizes the difference between the calculated and measured pseudosections of the apparent resistivity data, in agreement with the results of Kumar et al. (2009) and Udom et al. (1999). This is done by plotting apparent resistivity against pseudo-depth.
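For readers unfamiliar with the measurement underlying the pseudosections, the short sketch below computes apparent resistivity from the standard Wenner geometric factor and flags readings below the 20.8 Ωm contamination threshold adopted in this study; the readings themselves are hypothetical, not the survey data.

import numpy as np

def wenner_apparent_resistivity(a, delta_v, current):
    # Apparent resistivity (ohm-m) for a Wenner array with electrode
    # spacing a (m), potential difference delta_v (V) and current I (A):
    # rho_a = 2 * pi * a * (delta_v / I).
    return 2.0 * np.pi * a * delta_v / current

# Hypothetical readings along one traverse:
spacings = np.array([5.0, 5.0, 5.0])         # m
voltages = np.array([0.021, 0.300, 0.055])   # V
currents = np.array([0.1, 0.1, 0.1])         # A
rho_a = wenner_apparent_resistivity(spacings, voltages, currents)
contaminated = rho_a < 20.8                  # threshold used in this study
print(rho_a, contaminated)                   # [6.6, 94.2, 17.3], [True, False, True]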
The contaminated zone resistivities range from 0 to 20.8 Ωm. At surface distances of 7.5 m to 10 m, 26 m to 34 m and 53 m to 68 m, there are indications of contamination, seen as the dark bluish colour in the pseudosection. This contaminated zone is represented by low resistivity values below 20.8 Ωm and has percolated the entire probed depth of 17.1 m, where it appears concentrated. Percolation is suspected to occur through the pore spaces of the clayey materials at the top. Notably, Profile 5 is situated on the upper plains of the site and runs across the site, from the higher plains down to the lower plains.

Profile 6: This profile is located at the southern end, outside the dumpsite. It runs in the west-east direction and was used as a control line in the course of this work. Profile 6 was about 100 m away from the dumpsite and, accordingly, did not show any evidence of contamination. However, the survey revealed a shale body at depths from 3.85 m down to about 14.4 m, extending laterally from 48 m to 78 m. This body was depicted by resistivity values of 80 Ωm, 86 Ωm and 90 Ωm. Profile 6 also depicted the shaly nature of the soil surface within the vicinity, indicated by resistivity values of 51 Ωm and 54 Ωm.

Conclusion
Groundwater pollution in urban areas is a growing environmental problem worldwide, especially as many urban areas in Nigeria and beyond depend on groundwater for drinking purposes. The results of the electrical resistivity investigation of solid waste using VES and 2D resistivity imaging at Nsukka in Nsukka Local Government Area of Enugu State, Nigeria, have enabled us to establish the contamination of the subsurface environment and the depth of the aquifer. The investigation revealed that the maximum depth of migration of the leachate plume is 17.1 m, as shown in Profile 2, while the minimum depth is from the surface, as shown in Profiles 1 and 2. In the course of this study, six (6) lines were investigated, of which two were determined to be contaminated (Profiles 1 and 2). The contaminated zones or leachate plumes were observed to have resistivity values ranging from 1.19 to 20.80 Ωm and were suspected to be migrating through the fractures, joints and weathered zones in the shaly clay layers that characterize the upper parts, as seen in the geoelectric sections of the VES. Profiles 3, 4, 5 and 6 did not reveal any evidence of contamination in the VES and horizontal profiling resistivity values. This is suspected to be due to the impermeable shale and ferruginized siltstone overburdens, combined with the gently sloping topography of the dumpsite, as Profiles 1 and 2 lie in the lowest plain of the site. The VES carried out at the dumpsite showed that the aquiferous zone ranges from 30.26 m to 155.43 m. Of the eight soundings, four indicated pollution at the surface: VES 1A, 1B, 2A and 2B, which were done on Profiles 1 and 2, showed contaminated surface lithologies, marked by low resistivity values, to a maximum depth of 6.45 m. The geoelectric layers revealed litho-units of shale, clay, sand, siltstone and ferruginized siltstone, with sand being the water-bearing unit. Consequently, since the depth to the aquifer ranges from 30.26 m to 155.43 m while the maximum depth of contamination is 17.1 m, it is believed that the leachate has not percolated down to the aquiferous zones, and the aquifers are presumed to be free of contamination.
2022-05-31T04:07:18.706Z
0001-01-01T00:00:00.000
{ "year": 2020, "sha1": "1b2fa066f57856fc55c630014a6cce0bf7cada0c", "oa_license": "CCBY", "oa_url": "https://www.scienceopen.com/document_file/5baef7b6-617c-46fc-bc39-ac4cb6ca50ea/ScienceOpenPreprint/journal%20article%201.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "1b2fa066f57856fc55c630014a6cce0bf7cada0c", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
213520644
pes2o/s2orc
v3-fos-license
Variables Creation in Fraud Detection Based on New York Property Data

With the rapid development of technology, fraud has become an extremely serious problem in US society, which makes fraud detection an urgent need. Fraud detection refers to the process of finding anomalies in a large body of data by building fraud algorithms and models that can predict likely behaviour in real-world situations. This paper focuses on the 'variables creation' step of fraud detection using New York property data and discusses how to analyze a fraud problem and build variables from these data. The full fraud detection process provides practical help for solving the property fraud problem.

Introduction
In the US, people can commit fraud in various ways. Some steal others' identities to open multiple accounts, while others intentionally falsify numbers in property records to avoid high taxes. To catch these frauds, two different kinds of model can be built: unsupervised models and supervised models [1]. These two types of model target two kinds of problem: forensic accounting and real-time fraud algorithms. This paper revolves only around the forensic accounting problem. In forensic accounting, the algorithm and variables can be created from all the data, regardless of time flow. The purpose in this case is to find unusual records in a fixed dataset in order to catch possible frauds and leave them to investigators to be checked later [4]. While the goodness of a model is generally evaluated through a combined measure of error and model complexity (the lower the combined value, the better the model), the relevant measure in this case is the fraud detection rate (FDR), which equals the number of actual frauds found divided by the number of examined records. To solve this problem, unsupervised modeling is often the best method due to the absence of labels. Without previous examples of fraud, it is necessary to create new standards for determining which kinds of records should be regarded as anomalies. This paper therefore covers the whole process of building an unsupervised model on the New York property data, with a more detailed analysis of the variable creation step. The conclusion identifies a range of records with high fraud scores, which are regarded as our best guesses for fraud.

Data description
The data used in this case is a collection of New York property records provided by an unnamed government organization in 2010. The raw data contains 1,070,994 records and 32 fields, which can be classified into numerical and categorical fields. Some of the main field names are:
LTFRONT: lot front
LTDEPTH: lot depth
FULLVAL: full value of the building
AVLAND: average value of the land
AVTOT: average value of the total area
BLDFRONT: building front

It is clear from the field distributions that most of the data lies in a normal range and only a small part lies far from the commonly occurring values; these are exactly the outliers that the algorithm needs to recognize.

Data cleaning
So far the data is the raw data collected directly from the official website. The next important step is data cleaning, which puts the raw data into better shape in preparation for creating variables from it.
Data cleaning is mainly used to fill in missing values, which can be done in two ways: ① use the average or most common value of that field over all records to fill the missing fields; ② select one or more other fields that are important in determining the missing field, group the records into categories by those fields, and replace the missing value with the average or most common value for the appropriate group. In this case, both methods are applied to minimize the possibility of measurement bias.

Variable creation
Among all the steps of fraud detection, one of the most essential is variable creation, because the final result, the fraud score, depends largely on the quality of the variables. If the variables are chosen well enough to let the model distinguish normal data from strange data, the algorithm is very likely to succeed. In contrast, if the variables are poorly chosen, then no matter how good the model is, the result is unlikely to meet expectations. Before choosing variables, the purpose of the algorithm, the final goal to be achieved, should be considered. In this case, the goal is to find strange values, so we need to make comparisons and sort out the unusual records. Since the data concerns property, 'unit value' can be used for the comparison. Based on the fields of the cleaned data, the formula is:

Unit value = Value / (Area or Volume)

Combining the fields that contribute to Value with the fields that contribute to Area or Volume, we create 9 such ratios for each record. However, nine ratios alone are not enough. Variables should first be created as generously as possible in order to take all factors into consideration; there is no need to worry about the complexity of the model input, because many low-value dimensions will be removed in later steps. Therefore, geographical and logical factors should also be taken into account, namely zip code and tax class. For example, if a building is in Manhattan, which has the most expensive land prices in New York, it would be very strange for the building to have a low land value compared with the average land value of the area sharing the same zip code. The reason to include tax class is that it is generally acknowledged that properties paying the same amount of tax should have about the same value. Thus, we group records in 5 ways: zip5 (the first five digits of the zip code), zip3 (the first three digits of the zip code), TAXCLASS, borough, and all. With the 9 ratios and 5 groupings, 45 variables are eventually created by comparing each ratio with its group-level average.

Z scale
According to the Mahalanobis distance idea, the data represented by these variables is unevenly distributed, as in Figure 7 [3]. However, the data needs to be transferred onto the same scale, because this makes it easier to measure the distance from the center to each point. The expected result is as in Figure 8, where all the data is distributed on a common scale. To achieve this, the most commonly used method is the 'z scale', which refers to the following formula:

z = (x - μ) / σ    (1)

where μ is the mean and σ is the standard deviation of each variable. After transforming all the records with this formula, the mean of each variable becomes 0 and the standard deviation becomes 1. This makes the data easier to inspect and analyze further.
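A compact pandas sketch of the variable construction and z-scaling just described. The file name, the ZIP and BORO field names, and the helper names are assumptions for illustration; the ratio shown is one of the nine.

import pandas as pd

df = pd.read_csv('ny_property.csv')   # hypothetical file name

# One of the 9 unit-value ratios, e.g. full value per lot area:
df['r_fullval'] = df['FULLVAL'] / (df['LTFRONT'] * df['LTDEPTH'])

# Compare each ratio with its group average for several groupings:
df['zip5'] = df['ZIP'].astype(str).str[:5]   # assumes a ZIP field exists
for group in ['zip5', 'TAXCLASS', 'BORO']:
    group_mean = df.groupby(group)['r_fullval'].transform('mean')
    df[f'r_fullval_{group}'] = df['r_fullval'] / group_mean

# z-scale (formula (1)): mean 0, standard deviation 1 per variable.
cols = [c for c in df.columns if c.startswith('r_fullval_')]
df[cols] = (df[cols] - df[cols].mean()) / df[cols].std()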
Principal component analysis (PCA)
The data is now on a common scale, but with too many fields (dimensions) the model cannot be trained well. Dimensionality reduction is therefore needed, which in unsupervised modeling usually means PCA. PCA, principal component analysis, is a mathematical method used mainly to reduce dimensionality and remove linear correlations [6]. First, assume that every record represents a point in a high-dimensional space. The data spreads differently in each direction, meaning it has a different variance along each dimension; the larger the variance, the more influential the dimension is to the model. The next step is to rank the dimensions by their variance. Afterwards, a decision is made on how many dimensions to keep and how many to discard. In this case, we keep the eight dimensions with the highest variance.

Z scale again
After some dimensions are discarded, the data is no longer on the same scale. To treat all remaining PCs equally, the z scale is applied again, using the same method as before, which benefits the later steps. The data is then fully prepared for training models.

Algorithms building
With the prepared data, we can now build the algorithms. To clarify before the actual modeling: the process of building a model is really the process of finding the best surface in a high-dimensional space to fit as much of the data as possible. When the model is run on the given test data, the anomalies can easily be distinguished from the normal records, and later, when the model is put into practical use, the same process finds strange values, which are flagged through their fraud scores. For unsupervised models, there are mainly two methods to score the data: a heuristic function of the z scores, and an autoencoder. The heuristic function of the z scores is a basic method that uses a fixed formula to compute the distance from the center to each point (record) and uses this distance as the fraud score. The general formula is:

D = (|z1|^n + |z2|^n + ... + |zk|^n)^(1/n)    (2)

When n = 1, it is the Manhattan distance: D = |z1| + |z2| + ... + |zk|. When n = 2, it is the Euclidean distance: D = sqrt(z1^2 + z2^2 + ... + zk^2). In this case, we choose the Manhattan distance and record it as score 1. Because this simple method often cannot achieve good results on its own, we further employ a more advanced method, the autoencoder. An autoencoder is a special type of neural network, a machine learning model meant to reproduce its input. When the data is normal, it can be reproduced well; when it is abnormal, the reconstruction loss can be large [2]. As a result, the reconstruction error (distance) is interpreted as the fraud score. The autoencoder model contains many parameters that can be set manually, such as the number of layers, the number of nodes in each layer, and the number of training epochs. By adjusting these parameters, the model is trained to better fit the data [5]. Figure 13 shows the distribution of fraud score 2 produced by the autoencoder.
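To make the two scoring methods concrete, here is a minimal sketch: score 1 is the Manhattan distance of formula (2) computed on the top PCs, and score 2 uses a small scikit-learn network trained to reproduce its input as a stand-in for the paper's autoencoder. Model sizes and names are illustrative.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

def fraud_scores(Z):
    # Z: records x z-scaled variables. Returns (score1, score2).
    # Keep the 8 highest-variance directions, then z-scale again.
    pcs = PCA(n_components=8).fit_transform(Z)
    pcs = (pcs - pcs.mean(axis=0)) / pcs.std(axis=0)

    # Score 1: Manhattan distance from the center (formula (2), n = 1).
    score1 = np.abs(pcs).sum(axis=1)

    # Score 2: reconstruction error of a model trained to reproduce pcs.
    ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=500)
    ae.fit(pcs, pcs)
    score2 = np.abs(pcs - ae.predict(pcs)).sum(axis=1)
    return score1, score2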
Score combination
We now have score 1, obtained from the linear model, and score 2, obtained from the nonlinear model. To achieve a better model, the next step is to carefully weigh the reliability of the two scores and assign each a weight reflecting its influence on the final score.

Result analysis
Carried out step by step, the modeling leads to a set of statistically strange values, which are interpreted as the best guesses for fraud. However, these values still need to be confirmed as fraud by investigators from companies or banks, who are expert at translating the data results into real-world fraud cases. They use various methods to figure out why the data looks strange and whether it really is fraud, and they give the data scientists feedback on the correctness of the fraud detection. Based on this feedback, further adjustments are made to the fraud model, and the process repeats from the beginning.

Discussion
This paper has discussed the detailed process of building an unsupervised model for the NY property data. Due to limitations of space and scope, the background knowledge and the practical Python code cannot be shown in full. Future research can focus on a more in-depth analysis of variable creation and wider applications of the unsupervised model. More research into fraud detection is expected, so that the credit system can be substantially improved.

Conclusion
This paper discusses the whole process of building an unsupervised model for the New York property data, paying particular attention to the variable creation step. The process shows that creating variables is essential to the final quality of the model. This step requires careful consideration of the relevant factors, and creating as many candidate variables as possible also matters for the accuracy of the subsequent model training. As for choosing models, it is usually better to combine linear and nonlinear models so that a better final score can be produced without bias. With all the processes discussed above, the final model works well for detecting fraud in the NY property data. However, different kinds of data will place different requirements on the result, so new kinds of models and combination methods are expected to better solve real-world problems.
2020-02-20T09:07:20.569Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "f455d6830f5fb32dc6d957672bb6c5ca1800c4a9", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/icesed-19.2020.18", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "34e822e4089082f11c55cb4427875dca1204945e", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
258987710
pes2o/s2orc
v3-fos-license
Unsupervised Anomaly Detection in Medical Images Using Masked Diffusion Model

It can be challenging to identify brain MRI anomalies using supervised deep-learning techniques due to anatomical heterogeneity and the requirement for pixel-level labeling. Unsupervised anomaly detection approaches provide an alternative solution by relying only on sample-level labels of healthy brains to learn a representation with which abnormalities can be identified at the pixel level. Although generative models are crucial for generating such anatomically consistent representations of healthy brains, accurately generating the intricate anatomy of the human brain remains a challenge. In this study, we present a method called masked DDPM (mDDPM), which introduces masking-based regularization to reframe the generation task of diffusion models. Specifically, we introduce Masked Image Modeling (MIM) and Masked Frequency Modeling (MFM) in a self-supervised approach that enables models to learn visual representations from unlabeled data. To the best of our knowledge, this is the first attempt to apply MFM in DDPM models for medical applications. We evaluate our approach on datasets containing tumors and multiple sclerosis lesions and show the superior performance of our unsupervised method compared with existing fully/weakly supervised baselines. Code is available at https://github.com/hasan1292/mDDPM.

Introduction
Medical imaging (MI) systems play a crucial role in aiding radiologists in their diagnostic and decision-making processes. These systems provide medical imaging specialists with detailed visual information to detect abnormalities, make accurate diagnoses, plan treatments, and monitor patients. More recently, advanced machine learning techniques and image processing algorithms have been used to automate the medical diagnostic process. Among these techniques, deep learning models based on convolutional neural networks (CNNs) have achieved significant success in accurately identifying anomalies in medical images [30,12]. However, supervised CNN approaches have inherent limitations, including the requirement for extensive expert-annotated training data and the difficulty of learning from noisy or imbalanced data [11,17,16]. (Figure 1 caption: the difference between the generated image and the given unhealthy image is calculated to report the anomaly map; no masking mechanism is employed at inference.) By contrast, pixel-level annotations are not necessary for unsupervised anomaly detection (UAD), which uses only healthy examples for training.

In recent years, numerous architectures have been explored for UAD in brain MRI. Autoencoders (AE) and variational autoencoders (VAE) have proven effective for training and efficient inference, but their reconstructions often suffer from blurriness, limiting their effectiveness in UAD [3]. To address this limitation, researchers have focused on enhancing the understanding of image context through techniques such as spatial erasing [35] and leveraging 3D information [6,4]. Additionally, vector-quantized VAEs [26], adversarial autoencoders [7], and encoder activation maps [31] have been proposed to improve restoration quality. Generative adversarial networks (GANs) have emerged as an alternative to AE-based architectures for UAD [29].
However, the unstable training of GANs poses challenges, and GANs often suffer from mode collapse and a lack of anatomical coherence [3,22]. Recently, denoising diffusion probabilistic models (DDPMs) [15] have shown promise for UAD in brain MRI [34,5,25]. In the context of DDPMs, the approach involves introducing noise to an input image and subsequently using a trained model to remove the noise and estimate or reconstruct the original image [15]. Unlike most autoencoder-based methods, DDPMs retain spatial information in their hidden representations, which is crucial for the image generation process [27]. Recent works in medical imaging establish that they exhibit scalable and stable training properties and generate high-quality, sharp images with classifier guidance [33,34,28,25,26]. Further, [5,24,21] introduce patch-based DDPMs (pDDPMs), which offer better brain MRI reconstruction by incorporating global context information about individual brain structures and appearances while estimating individual patches.

Given the advantages observed when applying Masked Image Modeling (MIM) in conjunction with VAE frameworks [32,13,14], such as enhanced generalization capabilities and a more comprehensive understanding of the structural characteristics of images, we present the first investigation of Masked Image Modeling (MIM) and Masked Frequency Modeling (MFM) within DDPMs. In our proposed framework, masked DDPM (mDDPM), the need for classifier guidance is eliminated. The masking mechanism serves as a unique regularizer that incorporates global information while preserving fine-grained local features. This regularization imposes a constraint on the DDPM, ensuring the generation of a healthy image during inference regardless of the input image characteristics. In this study, we explore three variants of masking-based regularization: (i) image patch-masking (IPM), (ii) frequency patch-masking (FPM), and (iii) frequency patch-masking CutMix (FPM-CM). In the IPM approach, random pixel-level masks are applied to patches extracted from the original image before it is subjected to the diffusion process of the DDPM; the unmasked version of the same image is used as the reference for comparison. In the FPM approach, the image is first transformed into the frequency domain using the Fast Fourier Transform, patch-level masking is performed in the frequency domain as shown in Figure 2a, and the inverse Fourier transform is then applied to obtain the reconstructed image, which is used to calculate the reconstruction loss. In FPM-CM, random patches are sampled from the augmented image generated through FPM and inserted at the corresponding positions within the original clean image, as shown in Figure 2b. To evaluate our method, we use two publicly available datasets, BraTS21 [2] and MSLUB [20], and demonstrate a significant improvement (p < 0.05) in tumor segmentation performance.

Method
Given that anomalies may occur in any region of the brain during testing, we introduce data augmentation techniques that insert random augmented patches into the healthy input image z ∈ R^(C×W×H), with C channels, width W and height H, prior to the DDPM noise addition and removal. This allows us to generate a healthy image from an unhealthy image during inference, facilitating the calculation of the anomaly map.
The illustration of our approach can be seen in Figure 1. We discuss our unsupervised mDDPM approach and the proposed masking strategies in this section.

Fourier Transform
We first briefly introduce the Discrete Fourier Transform (DFT), as it plays a crucial role in our mDDPM approach. Given a 2D signal z ∈ R^(W×H), the 2D DFT, a widely used signal analysis technique, can be written as

F(x, y) = Σ_{h=0}^{H-1} Σ_{w=0}^{W-1} z(h, w) · e^{-j2π(xh/H + yw/W)},

where z(h, w) denotes the signal at position (h, w) in z, and x and y index the horizontal and vertical spatial frequencies of the Fourier spectrum. The inverse 2D DFT (2D-IDFT) is

z(h, w) = (1/(HW)) Σ_{x=0}^{H-1} Σ_{y=0}^{W-1} F(x, y) · e^{j2π(xh/H + yw/W)}.

Both the DFT and IDFT can be computed efficiently using the Fast Fourier Transform (FFT) algorithm, as in [23]. For medical images with various modalities, the Fourier transform is applied independently to each channel. Previous works such as [32,8,19] have demonstrated that the high-frequency part of the Fourier spectrum contains detailed structural texture information, while the low-frequency part contains global information.

DDPMs
In DDPMs, the forward process gradually adds noise to the input image z_0 according to a predefined schedule β_1, ..., β_T. The noise is sampled from a Gaussian distribution N(0, I), and at each time step t the noisy image z_t is generated as

z_t = sqrt(ᾱ_t) · z_0 + sqrt(1 - ᾱ_t) · ε,  ε ~ N(0, I),  with α_t = 1 - β_t and ᾱ_t = Π_{s=1}^{t} α_s.

The denoising process aims to reverse the forward process and reconstruct the original image z_0. The reconstruction is achieved by sampling z_0 from the joint distribution

p_θ(z_{0:T}) = p(z_T) Π_{t=1}^{T} p_θ(z_{t-1} | z_t),

where p_θ(z_{t-1} | z_t) is modeled as a Gaussian distribution N(μ_θ(z_t, t), Σ_θ(z_t, t)). The parameters μ_θ and Σ_θ are estimated by a neural network, for which we use a U-Net architecture. The covariance Σ_θ(z_t, t) is fixed as ((1 - ᾱ_{t-1}) / (1 - ᾱ_t)) β_t I, following the approach in [15]. In this work, we simplify the loss derivation by directly estimating the reconstruction z'_0 ~ p_θ(z_0 | z_t, t), as in [5], and using the mean absolute error (l1 error) as the loss function:

L_rec = |z_0 - z'_0|.    (5)

Instead of performing step-wise denoising for all time steps starting from t = T, as is commonly done when sampling images with DDPMs, we directly estimate z'_0 at a fixed time step t_fix.

Masked DDPMs
As stated above, we build mDDPM by introducing a masking block before the DDPM stage, as shown in Fig. 1, which can take three different forms during training: IPM, FPM, and FPM-CM.

Image Patch-Masking (IPM)
In the IPM approach, random masks are applied at the pixel level to patches extracted from the original image. The masked image is then subjected to the diffusion process of the DDPM, while the unmasked version of the same image is used as the reference during training. For reference, the output of the IPM block can be seen in Fig. 2b. During training, we sample N patch regions [p_1, p_2, ..., p_N] at random positions such that Σ_{n=1}^{N} A_n < A_z, where A_n is the area of patch p_n and A_z is the area of the image. With M denoting the binary mask covering these patches, the masked input is

z_M = z_0 ⊙ (1 - M),

where ⊙ represents the Hadamard product. z_M is then fed to the DDPM forward process. In the backward process, the network generates the denoised image z̃_0 as an estimate of the original image z_0. Since we compute the absolute difference between the original image z_0 and the denoised image z̃_0 during training, the objective function is L_M = L_rec, where L_rec is defined in Eq. 5. By applying random pixel-level masks to the patches, the IPM approach introduces a form of regularization that encourages the DDPM to generate images that closely resemble the unmasked reference image.
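A minimal numpy sketch of the IPM masking block, under our reconstruction of the masking equation above; the patch count, patch size, and function name are illustrative.

import numpy as np

def ipm_mask(z0, n_patches=4, patch=16, rng=np.random.default_rng()):
    # Zero out n_patches random square patches of side `patch` in the
    # image z0 (H x W); return the masked image and the binary mask M.
    H, W = z0.shape
    M = np.zeros((H, W), dtype=z0.dtype)
    for _ in range(n_patches):
        y = rng.integers(0, H - patch)
        x = rng.integers(0, W - patch)
        M[y:y + patch, x:x + patch] = 1.0
    z_masked = z0 * (1.0 - M)   # Hadamard product z0 ⊙ (1 - M)
    return z_masked, M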
Frequency Patch-Masking (FPM)
Masking in the frequency domain means that specific regions or patches within the frequency representation of the image are masked out, which allows selective modification of certain frequency components of the image while leaving others intact. Here we do not specifically target high-frequency or low-frequency bands; the MFM is performed at random locations. Once the patch-level masking is complete, the inverse Fourier transform is applied to the modified frequency representation, transforming the image back from the frequency domain to the spatial domain and reconstructing the modified image. With F and F^(-1) denoting the 2D DFT and its inverse and M a binary frequency-domain patch mask, the FPM block can be written as

z_F = F^(-1)( F(z_0) ⊙ (1 - M) ).

The reconstructed image is subsequently input to the DDPM with the identical objective function described earlier in Eq. 5.

Frequency Patch-Masking CutMix (FPM-CM)
In FPM-CM, random patches are sampled from the FPM output z_F and inserted at the corresponding positions within the original clean image. With M_s denoting the binary spatial mask of the sampled patches, the block output is

z_FC = z_0 ⊙ ¬M_s + z_F ⊙ M_s,

where ¬ indicates binary inversion and z_FC is the FPM-CM block output. We feed z_FC into the DDPM using the same objective function as the other approaches, stated in Eq. 5.
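Continuing the sketch, the FPM and FPM-CM blocks under the same reconstructed formulations, using numpy's FFT; the handling of the complex spectrum (taking the real part after masking) is our assumption, not necessarily the authors' exact implementation.

import numpy as np

def fpm(z0, freq_mask):
    # Frequency patch-masking: zero patches of the Fourier spectrum
    # (freq_mask is binary, same shape as z0), then invert the transform.
    spectrum = np.fft.fft2(z0)
    return np.fft.ifft2(spectrum * (1.0 - freq_mask)).real

def fpm_cutmix(z0, z_f, spatial_mask):
    # FPM-CutMix: paste patches of the FPM output z_f into the clean
    # image z0 at the positions marked by the binary spatial_mask.
    return z0 * (1.0 - spatial_mask) + z_f * spatial_mask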
Implementation Details
For our experiments, we use the publicly available IXI dataset for training [1]. The IXI dataset comprises 560 pairs of T1- and T2-weighted brain MRI scans. To ensure robust evaluation, the IXI dataset is partitioned into eight folds comprising 400/160 training/validation samples. To evaluate our approach, we employ two publicly available datasets: the Multimodal Brain Tumor Segmentation Challenge 2021 (BraTS21) dataset [2] and the multiple sclerosis dataset from the University Hospital of Ljubljana (MSLUB) [20]. The BraTS21 dataset includes 1251 brain MRI scans with four different weightings (T1, T1-CE, T2, FLAIR); following [5], it was divided into a validation set of 100 samples and a test set of 1151 samples, both containing unhealthy scans. The MSLUB dataset consists of brain MRI scans from 30 multiple sclerosis (MS) patients, each with T1-, T2- and FLAIR-weighted scans; it was split into a validation set of 10 samples and a test set of 20 samples as in [5], all representing unhealthy scans. Thus, our training, validation and testing data consist of 400, 270 and 1171 samples, respectively. In all experimental setups, we exclusively use the T2-weighted images of each dataset and perform preprocessing such as affine transformation, skull stripping and downsampling as in [5]. With these preprocessing steps, we filter out the background so that the masking block is applied only to foreground pixel patches.

We assess the performance of our proposed method, mDDPM, in comparison to several established baselines for UAD in brain MRI: (i) VAE [3], (ii) sequential VAE (SVAE) [4], (iii) denoising AE (DAE) [18], (iv) the GAN-based f-AnoGAN [29], (v) DDPM [34], and (vi) patched DDPM (pDDPM), which feeds patched input to the DDPM. We evaluate all baseline methods via in-house training using their default parameters. For VAE, SVAE and f-AnoGAN, we set the hyperparameter values according to [5]. For DDPM, pDDPM and mDDPM, we employ structured simplex noise instead of Gaussian noise, as it better captures the natural frequency distribution of MRI images [34]. We follow [5] for all other training and inference settings. By default, the models are trained for 1600 epochs. During training, the volumes are processed slice-wise, with slices sampled uniformly with replacement; during testing, all slices are iterated over to reconstruct the entire volume. Further, all experiments use a masking ratio varying randomly between 10% and 90% of the whole foreground region.

Inference Criteria
During training, all models minimize the l1 error between the healthy input image and its reconstruction. At test time, we use the reconstruction error as a pixel-wise anomaly score, Λ_S = |z_0 - z'_0|, where higher values correspond to larger reconstruction errors. To enhance the quality of the anomaly maps, we apply commonly used post-processing techniques [3,35]. Before binarizing Λ_S, we smooth the anomaly scores with a median filter of kernel size K_M = 5 and erode the brain mask for three iterations. After binarization and threshold calculation as in [5], we iteratively compute DICE scores [10] for different thresholds and select the threshold that yields the highest average DICE score on the selected test set. We also record the average area under the precision-recall curve (AUPRC) [9] on the test set.

Results
The comparison of our proposed mDDPM with the baseline methods is presented in Table 1. Our mDDPM outperforms all baseline approaches on both datasets in terms of DICE [10] and AUPRC [9]. Qualitatively, we observe smaller reconstruction errors from mDDPM than from patched DDPM [5] for healthy brain anatomy, as shown in Fig. 3. In particular, mDDPM (FPM-CM) detects anomalies with higher precision, yielding an anomaly map free from foreground noise.

Conclusion
This study introduces an approach for reconstructing healthy brain anatomy with a masked DDPM that incorporates image-mask and frequency-mask regularization. Our method, mDDPM, surpasses established baselines even though it is trained without supervision. A limitation of the proposed approach is the increased inference time associated with the diffusion architecture; future research could concentrate on making the diffusion denoising process more efficient by leveraging spatial context more effectively. We also intend to explore a masked diffusion transformer architecture in future studies, incorporating a latent masking scheme to improve the contextual relationship learning of DDPMs for object semantic parts within an image.
2023-06-01T01:16:03.078Z
2023-05-31T00:00:00.000
{ "year": 2023, "sha1": "78da59df6a1f59b14862bdf72f0e4c07bb18b343", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "78da59df6a1f59b14862bdf72f0e4c07bb18b343", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
268448791
pes2o/s2orc
v3-fos-license
Mechanism of Seebeck coefficient variation at the output of NiCr/NiSi thin film thermocouples with different wires

In this paper, NiCr/NiSi thin film thermocouples are prepared by magnetron sputtering and subjected to static, rapid temperature calibration experiments, yielding different calibration curves for different lead wires. The Seebeck coefficient of the NiCr/NiSi thin film thermocouple connected to NiCr/NiSi wires (41.39 μV/°C) is significantly higher than that of the same thermocouple connected to copper wires (0 μV/°C). The Seebeck coefficient behaviour of the K-type thermocouple is analyzed from the experimental data, which provides a parameter basis for the use of K-type thermocouples. The method has the advantages of simple equipment, convenient operation, and accurate and reliable data, and provides a basis for temperature measurement with this sensor.

Introduction
Current research on thin-film thermocouples mainly focuses on improving preparation methods and expanding application areas. However, regardless of the preparation method and the application, their temperature measurement principle is based on the Seebeck effect [1-5]. Therefore, using thermocouples to accurately measure transient surface temperatures presupposes the ability to accurately calibrate the Seebeck coefficient of the thin film thermocouple. The static calibration process for ordinary wire thermocouples involves placing them in a metering furnace with a standard platinum resistor, fixing the temperature of the cold end, holding the measuring end at different constant temperatures, and establishing the thermocouple's thermopotential-temperature relationship by measuring the calibration temperature with a standard wire thermocouple [6,7]. When this method is used to calibrate a tiny thin-film thermocouple, the entire sensor is placed in the homogenized temperature field of the metering furnace. The temperature gradient is then concentrated mainly in the compensation wire, which is inconsistent with the temperature distribution in thin-film thermocouple applications, so effective calibration cannot be realized, and the temperature measurement accuracy of thin-film thermocouples in practical applications suffers [8,9]. Since the Seebeck coefficient of a NiCr/NiSi thermocouple differs depending on the connected lead wires, the resulting differences in thermoelectric potential are still very large. The calibration in this paper addresses these problems. Only by analyzing the static calibration method of thin-film thermocouples in detail can their static characteristics be accurately evaluated, so as to further improve temperature measurement accuracy.
Thin film thermocouple structure design
In designing the NiCr/NiSi thin film thermocouple temperature sensor, the structure should first of all be simple and the preparation process easy to realize, so that it meets the needs of the sensor and its calibration. The NiCr/NiSi thin film thermocouple has high sensitivity and fast response speed, which meets the requirements of temperature testing [10-14]. The connection between the thin-film thermal electrode material and the compensation wire must be stable and reliable, must not fall off easily, and must transmit the temperature signal stably. Based on these principles, the NiCr/NiSi thin-film thermocouple is designed on a rectangular glass substrate, as shown in Figure 1. The thermal electrodes are patterned in this paper using a mechanical mask.

Preparation of NiCr/NiSi thin film thermocouple
The NiCr/NiSi thin-film thermocouple was prepared as follows. The polished mask, fixture and glass substrate were cleaned with acetone, ethanol and deionized water for 20 minutes each and blown dry with nitrogen. The mask was positioned and mounted on the fixture and fixed on the sample tray with high-temperature tape; the mask must be mounted carefully so that it bonds tightly to the substrate to avoid diffraction. When the vacuum in the coating chamber reached 1.0×10⁻³, the vacuum chamber of the coating equipment was closed. When the vacuum reached 6.0×10⁻³, the pressure in the chamber was adjusted to 0.6 Pa. After the pressure in the chamber stabilized, the power was switched on and the baffle was opened for 5 minutes of pre-sputtering to remove impurities from the target and ensure a stable sputtering process. After pre-sputtering, the target baffle was closed and sputtering of the NiCr functional film formally began. After sputtering of the NiCr film finished, the gas valve, molecular pump, insert valve and mechanical pump were closed in turn; after venting and cooling were complete, the sample was removed. The mask and target were then replaced to sputter NiSi and SiO2 in turn. The sputtering parameters are shown in Table 1 [18], and the procedure is the same as for NiCr.

Sensor performance testing is an important part of the sensor production and commissioning process [15]. To ensure that the developed sensors can deliver accurate, reliable, stable and rapid measurements in industry, scientific research and other settings, the performance parameters must be tested after packaging. Static characteristics are important performance indicators of thin film thermocouple temperature sensors [16]. The NiCr/NiSi thin-film thermocouple temperature sensor developed in this paper is a non-standard sensor, so it is calibrated by static calibration using laboratory equipment.

Calibration system design. Static calibration of a temperature sensor means placing the sensor in a temperature field that meets static standard conditions and determining the linear relationship between the sensor input and output [17]. The aim is to obtain the developed sensor's Seebeck coefficient, linearity, repeatability and other indicators. The calibration setup used in this paper mainly consists of a FLUKE-9144 dry metering furnace, a PLANCK-6190A icemaker and a DMM7510 digital multimeter. The schematic of the thin film thermocouple static calibration system is shown in Figure 2 [19,20].

Static calibration of thin-film thermocouples with NiCr/NiSi leads. As can be seen from Figure 3, the equation between the sensor thermopotential E and the temperature θ is E = 0.04139θ + 0.04139. From the static calibration results, it can be concluded that the developed NiCr/NiSi temperature sensor connected to NiCr/NiSi wires has a Seebeck coefficient of 41.39 μV/°C.

Static calibration of thin-film thermocouples with copper leads. In the same way, the developed NiCr/NiSi thin-film sensor is placed at the hot end, and the compensation wire connected to the copper wire is placed in a freezing-point thermostat held constant at 0 °C. The static calibration temperature range is set to 50-400 °C; the dry metering furnace temperature is raised in 10 °C steps and held constant for 5 minutes at each step to obtain calibration data. The expression obtained by fitting the data is shown in Figure 4: the equation between the thin-film thermocouple thermoelectric potential E and the temperature θ is E = 0. From the static calibration results, it can be concluded that the developed NiCr/NiSi thin-film sensor connected to copper wires has a Seebeck coefficient of 0 μV/°C.

Experiments and results
Through the above study, we find that the Seebeck coefficient of the NiCr/NiSi thin film thermocouple connected to NiCr/NiSi wires (41.39 μV/°C) is significantly higher than that connected to copper wires (0 μV/°C). Detailed comparison data are shown in Figure 5. This is because no thermoelectric potential is generated between the copper compensation wires; with copper leads, the potential is provided only by the film thermocouple itself, and since the hot and cold ends of the film thermocouple are in the same temperature field, the measured Seebeck coefficient is 0 μV/°C. By analyzing the Seebeck coefficient values of NiCr/NiSi thin film thermocouples connected to different wires, the sensor can be reliably applied to temperature measurement, smart tools, smart manufacturing and other fields.

Conclusion
In this paper, a NiCr/NiSi thermocouple connected to different lead wires exhibits different Seebeck coefficients. Using rapid temperature calibration experiments, different temperature calibration curves were obtained. The Seebeck coefficient of the NiCr/NiSi thin film thermocouple connected to NiCr/NiSi wires (41.39 μV/°C) is significantly higher than that connected to copper wires (0 μV/°C). The Seebeck coefficient behaviour of the K-type thermocouple is analyzed from the experimental data, which provides a parameter basis for the use of K-type thermocouples. The method has the advantages of simple equipment, convenient operation, and accurate and reliable data, and provides a basis for temperature measurement with the sensor. Comparing NiCr/NiSi thermocouples connected to different lead wires allows the Seebeck coefficient to be analyzed in detail for accurate temperature measurement. This is of great significance for the study of tool wear and for improving machining accuracy, machining efficiency and tool durability.

Figure 3. Static calibration curve of the thin film sensor.
Figure 4 . Figure 4. Static calibration curves for thin film thermocouples. Figure 5 Figure 5 Static calibration curves for wo outlets of a thin-film Sensor.
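As a minimal illustration of the static-calibration fit described above, the sketch below performs the least-squares line fit E = aθ + b and converts the slope to a Seebeck coefficient. The calibration data here are synthetic, generated from the reported fit E = 0.04139θ + 0.04139 (mV) plus noise; they stand in for the dry-furnace readings taken every 10 °C from 50 to 400 °C and are assumptions, not measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical readings: temperature in deg C, EMF in mV, built from the
# reported fit E = 0.04139*theta + 0.04139 with small synthetic noise.
theta = np.arange(50, 401, 10, dtype=float)
emf_mv = 0.04139 * theta + 0.04139 + rng.normal(0.0, 0.05, theta.size)

# Least-squares linear fit E = a*theta + b, as in the static calibration.
a, b = np.polyfit(theta, emf_mv, 1)

# The slope in mV/degC equals the Seebeck coefficient after scaling to uV/degC.
seebeck_uv_per_c = a * 1000.0
print(f"E = {a:.5f}*theta + {b:.5f} (mV); Seebeck ~ {seebeck_uv_per_c:.2f} uV/degC")
```

Run on the real calibration table, the same fit would recover the 41.39 μV/°C coefficient for the NiCr/NiSi leads and a near-zero slope for the copper leads.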
EXERCISES AND NEUROMUSCULAR ELECTRIC STIMULATION FOR MEDIAL LONGITUDINAL ARCH: CLINICAL TRIAL ABSTRACT Objective The extrinsic muscles, such as the posterior tibialis and the long flexor of the hallux, and the intrinsic muscles of the foot are part of the active subsystem of the foot core system and play an essential role in the control of the medial longitudinal arch. Because these muscles are difficult to contract voluntarily, neuromuscular electrostimulation (NMES) becomes a resource combined with strengthening and is recommended for rehabilitation. This work aims to evaluate the effectiveness of NMES associated with exercise on the deformation of the medial longitudinal arch. Methods This is a randomized, blinded clinical trial. Sixty asymptomatic participants were divided into three groups: NMES, exercise, and control. The NMES and exercise groups performed seven exercises for the intrinsic and extrinsic muscles twice a week for 6 weeks, and the NMES group additionally received NMES during five of the exercises. Navicular height and medial longitudinal arch angle were measured before and after the intervention period. Results There were no statistically significant differences between groups for navicular height or medial longitudinal arch angle. Conclusion NMES associated with exercise does not change the characteristics of the medial longitudinal arch in asymptomatic individuals. Level of Evidence I; Randomized clinical trial.

INTRODUCTION The main load-bearing and shock-absorbing structure of the foot is the medial longitudinal arch. Changes in the medial longitudinal arch can affect foot biomechanics, change the distribution of plantar loads in individuals with injuries of the feet or of other joints, and cause pain. 1-3 The foot core system is a paradigm for understanding medial longitudinal arch function that compares it to spine stability. There are three subsystems in this theory: the passive, including the foot bones, plantar fascia, and ligaments; the neural, with muscle and tendinous receptors, local and global, in ligaments and on the plantar skin; and the active, with intrinsic muscles as local stabilizers and extrinsic muscles that are essential for the global movements of the foot. 4

[Figure 1. Stances for the proposed exercises for the intrinsic and extrinsic muscles of the foot. 1: initial (A) and final (B) stances for the posterior tibialis muscle exercise; 2: initial (A) and final (B) stances for the long flexor of the hallux exercise; 3: stance for the short-foot exercise with bipedal support while sitting; 4 and 5: stance for the short-foot exercise with unipedal support while sitting (picture showing support on the right foot); 6: stance for the exercise for intrinsic and extrinsic muscles while standing and using a rubber band; 7: stance for the single-leg exercise for intrinsic and extrinsic muscles with unipedal support.]

Several types of exercise have been proposed to increase muscle activation, with a focus on the active contribution to the medial longitudinal arch. However, these muscles are difficult to feel and contract. 5,6 If muscles regulate the deformity and stiffness of the medial longitudinal arch, there is a possibility that electric stimulation applied to the intrinsic muscles could affect their natural contraction ability, resulting in an increase in the height and a decrease in the length of the medial longitudinal arch. 7 The idea of stimulating the medial longitudinal arch with neuromuscular electric stimulation (NMES) as a way to activate these muscles therefore seems reasonable and logical.
Our objective was thus to evaluate the effect of exercise, alone or combined with NMES, on anatomical changes of the medial longitudinal arch. Trial design This parallel randomized controlled trial involved sedentary adults (not practicing any physical activity) without foot pain who were evaluated in the physiotherapy service of a university hospital between January 2017 and March 2018. The protocol was approved by the institutional review board (62766716.4.0000.5479) and registered at clinicaltrials.gov (NCT03117244). All participants signed informed consent forms. We report this trial according to the CONSORT Statement of Randomized Trials, especially the extension for nonpharmacologic treatments, 8 and the TIDieR reporting guideline. 9 Participants We recruited participants for this trial through the institutional and the researchers' personal social media channels. We invited them to come to our physiotherapy clinic for the initial screening, which included personal health history, demographics, basic anthropometry, physical examination, and exercising habits. We excluded individuals reporting neurological diseases, and any foot or leg fracture, muscular or joint injury, or surgery in the previous 12 months. We also excluded participants with previous or current rigid flat foot or calcaneal valgus greater than 10 degrees. Interventions and groups We allocated participants into three comparison groups: the Exercise, the NMES, and the Control groups. In the Exercise group, participants received individual training twice a week for six weeks, at each participant's most convenient time (morning or afternoon). In the NMES group, they received the same exercises plus electric stimulation, as described below, twice a week for six weeks, also at the most convenient time for the individual. Participants randomized to the Control group were examined and then told to keep their routines and activities of daily life; we only asked them to come back to the service in six weeks for a new evaluation. The participants in the Exercise group performed a total of seven movements, as described in Figure 1 and Table 1. Figure 1 shows the stance for each exercise and the muscles activated in the movements. The same exercises (intensity and duration) were proposed for all participants in this group, with no modifications according to anthropometry. In the NMES group, during the exercises numbered 1 to 5 (as shown in Figure 1), participants also received electrical stimulation to the foot. We applied a biphasic, symmetrical, depolarized current with rectangular pulses, a medium-frequency carrier modulated at low frequency, using a pulse generator (Sonophasys, EUS.0503, KLD Biosistemas, São Paulo, Brazil) and two self-adhesive silicone electrodes (Self-Adhesive Electrode Valutrode 5x5cm, Arktus, Santa Tereza do Oeste, Paraná, Brazil) placed over the muscle bellies of the long flexor of the hallux, the posterior tibialis, and the intrinsic muscles of the foot. The carrier frequency was 2,500 Hz and the modulation frequency was 50 Hz, with an output duty cycle of 20%, one-second up and down ramps, and an on-and-off time with a 1:1 ratio, with the on period matched to the time needed to perform each series of exercises. The same physiotherapist administered the interventions (the exercises and the neuromuscular electric stimulation) for all participants in both groups. Evaluations and outcomes For two weeks, we trained an independent physical therapist (author RDPA), with five years of experience, to perform the evaluations for this study.
In the training we focused on palpation of anatomical structures, identification of reference points, and the measurements to be taken. After training, the physical therapist evaluated 10 healthy volunteers as a pilot study, on two occasions with a one-week interval, and we registered these measurements. We calculated the intraclass correlation coefficient between the two measurements of the same individual, presetting the rule that a coefficient lower than 0.4 would not be acceptable. 10 The evaluator took the basic demographic and clinical history of the included participants. Then, he measured the angle of the calcaneus, with the participant lying in the prone position with the feet off the gurney. He palpated the calcaneus medially and laterally and bisected it, marking its lower and middle points to form a line between the points. In this way, he identified the subtalar neutral position. With palpation of the talus, he measured the varus or valgus of the calcaneus using a plastic goniometer with a protractor and two 20 cm rulers (SH5205, Carci, São Paulo, Brazil). 11 The therapist then asked the participant to sit, with hips, knees, and ankles flexed at 90 degrees, and identified further anatomical points with a marker: the center of the medial malleolus, the tuberosity of the navicular, and the head of the first metatarsal. Next, he palpated the lateral and medial aspects of the talus, with the subtalar joint in the neutral position, and measured the medial longitudinal arch angle and the navicular height. The therapist repeated these measurements with the participant standing with bipedal support, with the subtalar joint in a relaxed position. 11 To measure the medial longitudinal arch angle, the evaluator placed the center of the goniometer on the tuberosity of the navicular, with its arms facing the center of the medial malleolus and the head of the first metatarsal. 12 For the navicular height, he measured the distance (in centimeters) between the ground and the tuberosity of the navicular. 12 All measurements were made on both feet of each participant by the same evaluator. Randomization and blinding The author DMGN performed the randomization for this study using a list from the randomization.com website. We generated a randomization sequence for 60 participants using the site's first and original generator, which uses the method of randomly permuted blocks. When a participant arrived for the preliminary evaluation for inclusion, if the individual was considered eligible and consented to participate, DMGN consulted the list and informed the physiotherapist of the allocation. In this trial, due to the nature of the interventions used, it was not possible to blind participants: they all knew which intervention they were or were not receiving. Nor could the physical therapist who administered the interventions be blinded, since he guided the exercises and applied the electric stimulation. We asked participants to hide their allocation from the evaluator (i.e., not to tell him whether they had performed exercises, for example). Sample size and statistical analysis We calculated the sample size (for ANOVA) using data from the pilot study (performed during evaluator training). We adopted a significance level of 5%, a power of 80%, and navicular height as the primary outcome, considering as significant a minimum difference of 20% between means, with a standard deviation of 0.75. Under these assumptions, the required sample size was 16 participants per group. Allowing for losses, we worked with a sample size of 20 participants per group.
For the intraclass correlation coefficient (ICC) analysis of the pilot study, we determined the standard error of measurement (SEM) as the standard deviation of the differences between the first and second measurements multiplied by √(1 − ICC). 13 We compared the study evaluations between groups and between moments (before and after the intervention). We used the Shapiro-Wilk test to verify the distribution of the data. Non-parametric observations were described using medians and minimum and maximum values and compared with the Kruskal-Wallis test; parametric observations were described using means and standard deviations and compared with ANOVA. The level of significance adopted for all tests in this study was 5%, and the software was SPSS version 13.1. RESULTS In the study period, we recruited 60 participants, 50 of whom completed the follow-up, as shown in Figure 2. The reason for dropouts in the intervention groups was schedule conflicts with work or personal appointments. Table 2 shows the anthropometric evaluations and the similarity between groups at baseline. For the pilot evaluation, the ICC and SEM between measurements were 0.98 and 0.15 degrees for the medial longitudinal arch angle in the neutral position of the subtalar joint, 0.98 and 0.11 degrees for the relaxed position, 0.97 and 0.02 cm for navicular height in the neutral position, and 0.92 and 0.06 cm for navicular height in the relaxed position. This variation was considered acceptable. The medial longitudinal arch angle and navicular height measurements (Tables 3 and 4, respectively) show that neither exercise nor electric stimulation resulted in significant outcome changes. DISCUSSION In this randomized controlled trial, exercising alone or with electric stimulation did not result in any difference in the medial longitudinal arch measurements. To our knowledge, this is the first randomized trial using NMES and exercises to assess changes in the medial longitudinal arch. Typical values were between 3.6 and 5.5 cm for navicular height and 130 and 152 degrees for the medial longitudinal arch angle in a study in Denmark. 12 Our participants had values within these ranges both before and after exercising and electric stimulation, indicating that the effects of the intervention, if any, were not evidenced by anatomical changes. Short-foot exercises can reactivate muscular components of the core system that may be inactive, allowing these muscles to contribute to absorption and propulsion during activities involving the foot, 6 such as walking and standing. Mulligan et al. observed improvements in the medial longitudinal arch and the dynamic balance of the foot after four weeks of at-home intrinsic muscle training. 14 Hashimoto et al. also evaluated the effects of strength training for the intrinsic flexor muscles. The authors measured the medial longitudinal arch length and the transverse arch of the foot after an eight-week program of 200 repetitions a day, three times a week, with a load of three kilos, and observed increased strength and decreased arch lengths. 15 However, both were before-and-after studies with no control group. 14,15 The motivation for this study was the lack of properly conducted randomized clinical trials evaluating the value of adding electrical stimulation to exercise in the rehabilitation of the foot core. 4,6
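As an aside on the reliability analysis described in the Methods above, the sketch below computes the standard error of measurement as SEM = SD × √(1 − ICC). The between-session standard deviations used here are assumed values for illustration; the paper reports the ICC/SEM pairs but not the SDs themselves.

```python
import math

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement from the between-session SD and the ICC."""
    return sd * math.sqrt(1.0 - icc)

# Hypothetical SDs (assumptions); ICCs as reported for the pilot evaluation.
print(sem(sd=1.06, icc=0.98))  # angle, subtalar neutral: should approach ~0.15 deg
print(sem(sd=0.12, icc=0.97))  # navicular height, neutral: should approach ~0.02 cm
```

The same one-liner, applied to the real pilot data, reproduces the reported SEM values and makes the "acceptable variation" judgment easy to audit.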
Kelly et al. explored the possibility of using a direct current to stimulate the abductor hallucis, flexor digitorum brevis, and quadratus plantae. The authors observed transient changes in navicular height and medial longitudinal arch angle through 3D kinematics, suggesting that the stimulation fired the intrinsic muscles to control the stiffness and deformation of the medial longitudinal arch. The experiment, however, was small, with nine healthy males and no control group. 7 Recently, Ebrecht et al. 16 conducted a randomized trial on the effect of an NMES intervention on intrinsic foot muscle cross-sectional area as a proxy for muscle strength. The authors aimed to verify whether NMES would change the cross-sectional area as measured by ultrasound, improve arch stability, and reduce muscle fatigue. The measurements were made after 20 minutes of running on a treadmill, barefoot, for all participants (except the passive control group), with subgroup analyses for experienced and beginner runners. Arch stability and fatigue were evaluated through the static navicular drop. No strengthening effect of the intrinsic foot muscles using NMES was verified. However, there was little information on the NMES application parameters, and the authors themselves questioned whether the intervention had been too short or whether the cross-sectional area and the navicular drop were suitable measures of muscle strength. The small sample size, especially for the subgroup analyses, is also a concern. The authors suggested that a study with people who do not exercise was needed. Using NMES on healthy muscles is a controversial issue in the literature, but studies have investigated adding NMES to exercise for muscles of the leg, some with positive results, 17 others without. 18 One explanation for the failure of NMES in these studies would be that they generally used participants with no neural or mechanical impairments, whereas in a physical rehabilitation context of injured, wasted, or denervated muscles, or following periods of immobilization, it might have detectable effects. 19 We opted, thus, for a simple and basic measurement, possible to perform without special equipment, and for an area of the body not explored by well-conducted and well-reported RCTs: the medial longitudinal arch of the foot. A limitation of our study is that we did not classify the different types of feet (normal, pronated, and supinated) at baseline. However, we do not have data on the prevalence of foot pronation in our population, and the only reference data for "normality" available are based on populations that differ substantially from ours in ethnicity and anthropometry. 12 The participants in our intervention groups trained twice a week, but the literature is controversial as to the ideal frequency of exercise for gaining muscle strength. A recent systematic review with meta-analysis and subgroup analyses found that higher training frequency produced better results for multiarticular exercises, upper-limb training, young adults, and the female sex; no significant association was found between frequency and strength gain for uniarticular exercises and lower-limb training in male, middle-aged, and elderly populations. 20 Again, this shows that the disparity of training protocols, and not the NMES per se, could be responsible for the lack of effects we found. We studied young adults, 74% of them female, training twice a week, but there is no evidence that an increase in exercise frequency would have helped.
Future studies should focus on the motor control of the muscles involved, that is, on their activation during the support and impulse phases of walking and running, and interventions must be focused on this. The medial longitudinal arch should be the focus of investigations, including static and dynamic deformation. The strengthening of the medial longitudinal arch muscles should be studied in patients symptomatic for foot and ankle disorders. However, the outcomes must be better designed, analyzed, and reported by researchers, allowing comparison between protocols. CONCLUSION NMES associated with exercise does not change the characteristics of the medial longitudinal arch in asymptomatic individuals.
Probing beyond the laser coherence time in optical clock comparisons We develop differential measurement protocols that circumvent the laser noise limit in the stability of optical clock comparisons by synchronous probing of two clocks using phase-locked local oscillators. This allows for probe times longer than the laser coherence time, avoids the Dick effect, and supports Heisenberg-limited measurement precision. We present protocols for such frequency comparisons and develop numerical simulations of the protocols with realistic noise sources. These methods provide a route to reduce frequency ratio measurement durations by more than an order of magnitude. I. INTRODUCTION Optical clock measurements are the most stable measurements of any kind [1-3], driven largely by recent progress in ultrastable lasers [4-6]. Still, laser frequency noise limits the stability of frequency comparisons well short of the limits imposed by atomic coherence [7], and has so far prevented the use of Heisenberg-limited measurements that realize a quantum enhancement in measurement stability [8,9]. High-stability optical clock comparisons are critical for the future redefinition of the SI second [10,11] and provide a key measurement tool for the parameters of fundamental physical theories [12-14], as well as relativistic geodesy with high spatial and temporal resolution [15-17]. While there has been much recent progress both toward improving the frequency stability of clock lasers and toward developing measurement protocols aimed at circumventing clock laser noise using multiple atomic ensembles [18-21], it is likely that for the foreseeable future optical clock stability will continue to be limited by local oscillator noise. It is important to recognize, however, that none of the clock applications mentioned above require good absolute (i.e., single) clock stability. For two clocks operating at the same frequency, it is possible to have better clock comparison stability than absolute clock stability. For example, clock comparison instability due to the Dick effect [22-24] can be circumvented by synchronous interrogation of two atomic ensembles with a single local oscillator (LO), which has been demonstrated for microwave [25-27] as well as optical clocks [28,29]. A related technique uses a single clock laser to simultaneously probe two clock atoms and derives an error signal from correlations in the transition probabilities between the two [7,30], allowing the probe time to extend beyond the laser coherence time. Here, we expand these ideas to the more general case of frequency comparisons between clocks operating at different frequencies. We take advantage of the fact that the relative phase between two local oscillators, even if they are separated by optical frequencies, can, in general, be stabilized more precisely than the absolute phase [31,32]. We show that, by using phase-locked LOs and synchronous probing of multiple clocks, optical clock comparisons can operate near the limits imposed by atomic coherence and achieve Heisenberg-limited performance even in the presence of laser noise. In what follows, we consider the use of this technique in several relevant regimes of optical frequency measurements, distinguished primarily by the number of atoms in each of the two clocks.
We compare the achievable stability in these measurements to what can be achieved in a typical measurement protocol with independent LOs and asynchronous probing, but otherwise identical clock parameters. First, we introduce the analytic (Sec. II) and numerical (Sec. III) calculations, focusing on the case when the projection noise of clock 1 is much lower than that of clock 2. This is relevant, for example, in a frequency comparison between a single-ion clock and an optical lattice clock. In Sec. IV, we extend this discussion to the case where a small number of trapped ions are prepared in an entangled state. In Sec. V we further extend our protocol to the case when both clocks have many atoms, as would be true, for example, in a measurement between two optical lattice clocks. In all cases we find a significant improvement in the measurement stability in the presence of realistic LO noise compared to the usual measurement protocol with independent clocks.

II. ANALYTIC ESTIMATES OF CLOCK STABILITY

The standard quantum limit (SQL), also known as the projection noise limit, for an atomic clock using Ramsey spectroscopy on N uncorrelated atoms can be written as

σ_y(τ) = 1/(2πν√(NTτ)),   (1)

where ν is the atomic transition frequency, T is the Ramsey probe duration, and τ is the total measurement duration [33]. Local oscillator noise constrains clock stability by limiting T to some fraction η of the LO coherence time [34], which is often much shorter than the atomic coherence time. If the LO noise is predominantly flicker with a fractional frequency instability σ_L, we optimize the stability of the atomic clock by choosing T = η/(νσ_L).

[FIG. 1. Optical clock comparison with phase-locked LOs. (a) A cavity-stabilized laser simultaneously probes clocks at two different frequencies, which are phase locked via a frequency comb and active path-length stabilization. (b) Timing diagram of a near-synchronous Ramsey experiment. The phase φ_1^est measured at clock 1 is used to correct the laser phase before the final pulse of the second clock. (c) Transition probabilities as a function of φ_1 for clock 1 (red) and clock 2 before and after applying the feed-forward phase (blue dotted and blue solid lines, respectively). The size of the projection noise for the two clocks is denoted by the thicker lines, and the distribution of laser phase noise is depicted as the gray region.]

In a typical frequency ratio measurement two LOs are stabilized independently to two atomic ensembles, and their frequency ratio is measured using a frequency comb. The clock stability is optimized on clock j by maximizing the probe duration T_j while ensuring that the relative phase between the LO and the atoms, given by

φ_j = 2π(f̄_j − ν_j)T_j,   (2)

does not exceed the range [−π/2, π/2], where f̄_j is the mean frequency of LO j during the probe duration. The measurement variance in the frequency ratio is just the sum of uncorrelated contributions from the two clocks as described by Eq. (1). Now consider the case that clock 1 and clock 2 are probed simultaneously with phase-locked LOs (see Fig. 1), so that their frequencies f_1(t) and f_2(t) are related exactly by a known ratio β = f_2(t)/f_1(t), and the noise in the phase measurements is correlated. The phase evolution of clock 2 during the probe can be written as

φ_2 = (ν_2/ν_1)φ_1 − 2πǫf̄_1T_2,   (3)

where ǫ = ν_2/ν_1 − β is the current error in the frequency ratio measurement. The first term in Eq. (3) correlates the phase measurements on the two clocks and will dominate φ_2 when the static ratio ν_2/ν_1 is sufficiently well known.
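As a quick numerical illustration of Eq. (1) and of the coherence-limited probe time T = η/(νσ_L), the sketch below evaluates both for one clock. The specific numbers (a 518-THz Yb-like transition, σ_L = 4×10⁻¹⁶, η = 0.4) are illustrative assumptions, not parameters taken from this paper's Table I.

```python
import math

def sql_instability(nu: float, N: int, T: float, tau: float) -> float:
    """Projection-noise (SQL) fractional instability, Eq. (1):
    sigma_y(tau) = 1 / (2*pi*nu*sqrt(N*T*tau))."""
    return 1.0 / (2.0 * math.pi * nu * math.sqrt(N * T * tau))

# Assumed example parameters (not from the paper's appendix).
nu = 5.18e14        # optical transition frequency, Hz
sigma_L = 4e-16     # flicker-floor laser instability
eta = 0.4           # fraction of the laser coherence time used for probing

T_opt = eta / (nu * sigma_L)   # coherence-limited Ramsey probe time
print(f"T_opt = {T_opt:.2f} s")
print(f"sigma_y(1 s), N = 1:    {sql_instability(nu, 1, T_opt, 1.0):.2e}")
print(f"sigma_y(1 s), N = 1000: {sql_instability(nu, 1000, T_opt, 1.0):.2e}")
```

The √N and √T scalings visible here are what make extending the probe time beyond the laser coherence limit, as developed next, so valuable.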
In the presence of this correlated noise, information from the measurements of clocks 1 and 2 can be combined to relax the restriction |φ_j| < π/2, such that one or both clocks can be operated beyond their laser coherence time. We illustrate this idea by considering a comparison between a clock with N_1 atoms at frequency ν_1 and a single-atom clock (N_2 = 1) at frequency ν_2 > ν_1. This describes, for example, the comparison between an optical lattice clock and a single-ion clock. For a typical (asynchronous) clock comparison, with N_1 ≫ ν_2/ν_1 and no dead time in either clock, the measurement noise is dominated by the projection noise of clock 2. This is limited by the condition T_2 = η/(σ_Lν_2), such that (Δβ/β)² ≃ σ_L/(4π²ην_2τ). With simultaneous probes and phase-locked LOs, the measured value of φ_1 can be used to unwrap the measured value of φ_2 via Eq. (3) and extend the clock 2 probe duration well beyond η/(σ_Lν_2). One way to do this is illustrated in Fig. 1, where the atom-laser phase difference φ_1^est is applied as a feed-forward correction to the laser before the measurement on clock 2. This measurement of φ_2 − (ν_2/ν_1)φ_1^est is then a differential phase measurement between the two clocks, which is kept in the invertible range |φ_2 − (ν_2/ν_1)φ_1^est| < π/2. The expected reduction of projection noise in this protocol for different atom numbers N_1 has been plotted as the dash-dotted lines in Fig. 2, where we have included the projection noise contributions from both clocks. In addition to the reduction of projection noise plotted, Dick effect noise is absent for the differential measurement, even in the presence of dead time. As shown, the available stability improvement using this protocol scales with the frequency ratio, but it must be supported by a sufficiently precise measurement of φ_1, requiring √N_1 ≫ ν_2/ν_1. Numerical simulation results, as described below, are plotted along with the analytical estimates in Fig. 2.

III. NUMERICAL MODEL

The arguments outlined above give a conceptual picture of the differential clock comparison protocols that we propose. The purpose of these protocols is to make optical frequency comparisons immune to the dominant sources of laser noise that limit the comparison stability. To include, in detail, laser noise with realistic noise spectra, we develop here a Monte Carlo simulation of the protocols that makes use of experimentally demonstrated values for all noise contributions, taken from the literature. In what follows we describe the basic numerical model and its application to the lattice-ion measurement described in Sec. II. In Secs. IV and V it is adapted to other frequency ratio measurement scenarios. The laser frequency noise in these simulations is designed to reproduce noise spectra representative of state-of-the-art clock lasers [4,36]. Similarly, differential noise between the two probe lasers is modeled based on published results for active path-length stabilization [31] and coherence transfer through a femtosecond frequency comb [32]. During each clock cycle, both correlated and differential laser frequency noise is generated by filtering pseudorandom white noise in the Fourier domain [37]. The Dick effect in these simulations arises naturally when we introduce dead time to the clock. Specific values for the parameters of the model are provided in the Appendix (Table I).
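A minimal sketch of the noise-generation step just described, shaping pseudorandom white noise in the Fourier domain to a flicker (1/f) power spectrum, is given below. The sampling step and noise level are placeholder assumptions; the paper's actual spectra and parameters are those of its Table I.

```python
import numpy as np

def flicker_noise(n_samples: int, dt: float, h_m1: float, rng) -> np.ndarray:
    """Generate a fractional-frequency noise trace with a 1/f (flicker) PSD
    by filtering white noise in the Fourier domain. h_m1 sets the flicker
    level and is an illustrative assumption, not a value from the paper."""
    freqs = np.fft.rfftfreq(n_samples, d=dt)
    spectrum = np.fft.rfft(rng.normal(size=n_samples))
    shaping = np.zeros_like(freqs)
    shaping[1:] = np.sqrt(h_m1 / freqs[1:])  # amplitude ~ f^(-1/2) -> PSD ~ 1/f
    return np.fft.irfft(spectrum * shaping, n=n_samples)

rng = np.random.default_rng(1)
y_common = flicker_noise(n_samples=4096, dt=0.01, h_m1=1e-31, rng=rng)
```

Averaging this trace over each probe window gives the per-cycle noise term n̄_{j,k} used in the model below; a second, much smaller trace of the same form can stand in for the differential (path-length and comb-transfer) noise.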
The laser frequency for each clock, labeled j, can be written as

f_j(t) = ν_j + n_j(t) + c_j(t),   (5)

where ν_j is the static atomic resonance frequency, n_j(t) is the laser noise term, and c_j(t) is the frequency correction, which is updated at the end of each clock cycle. Each clock cycle, labeled below with k, consists of the clock probe duration followed by dead time required for steps in the experimental sequence such as detection, loading, laser cooling, and state preparation. We have assumed for all of our simulations that the durations of the Ramsey π/2 pulses are short compared to the Ramsey probe duration T. The time-averaged frequency of the clock j laser during cycle k, given by f̄_{j,k} = ν_j + n̄_{j,k} + c_{j,k}, is used to model the atom-laser phase evolution via Eq. (2). Typically, the phase of the second Ramsey pulse is shifted by −π/2 with respect to the first. If we include a finite excited-state lifetime τ, the atomic transition probability is given by

p = [1 + e^{−T/τ} sin φ]/2,   (6)

and its inverse is given by

φ = R⁻¹(p) = arcsin[(2p − 1)e^{T/τ}],   (7)

with R⁻¹(p) ∈ [−π/2, π/2].

[FIG. 2. Noise reduction for a frequency ratio measurement of a many-atom clock with a single-atom clock. Precise laser phase measurements on clock 1 allow the unambiguous determination of the clock 2 phase for probe durations longer than the laser coherence limit, giving a reduction of measurement noise compared to the projection noise limit for asynchronous clock comparisons with otherwise identical noise. Simulation results (points) reproduce the analytical estimates (dash-dotted lines) up to the point that the projection noise for the two clocks is comparable. Inset: Minimum relative variance R_{β,min} plotted vs N_1, showing that higher atom numbers support greater suppression of the noise.]

We estimate the phase of clock j during probe k using the measurement result p_{j,k} = M_{j,k}/N_j, where N_j is the total number of atoms and M_{j,k}, randomly selected from a binomial distribution, is the number of atoms measured to be in spin up. In some cases, an additional measurement phase θ_{j,k} is applied and must be accounted for in the phase inversion; in this case, φ_{j,k}^est = R⁻¹(p_{j,k}) − θ_{j,k}. For Fig. 2, for example, we have θ_{2,k} = −β_kφ_{1,k}^est from the feed-forward correction to the laser. By properly accounting for the measurement phases, the phases φ_{1,k}^est and φ_{2,k}^est estimate the real atom-laser phase evolution given in Eq. (2). In our protocols, we take advantage of the fact that much of the noise in these estimates is common mode, and we correct the ratio using only the differential component of the atomic phase measurements. For the kth probe, we set β_k equal to our current best knowledge of the actual atomic transition frequency ratio, which is updated according to

β_{k+1} = β_k − G_β (φ_{2,k}^est − β_kφ_{1,k}^est)/(2πν_1T_k),   (8)

where G_β is the gain of the ratio servo. The scaling factor ν_1 should be close to the frequency of clock 1, but it only modifies the gain of the ratio servo, so its accuracy is not critical. Corrections are applied to the laser system itself via

c_{1,k+1} = c_{1,k} − G_1 φ_{1,k}^est/(2πT_k),   (9)

where G_1 is the gain of the clock 1 frequency servo. Here we have used the fact that the projection noise of clock 1 is much better than that of clock 2, so that only the phase measurements on clock 1 are relevant, but in principle, both can be used together to feed back on the laser. In order to achieve enough feedback gain to overcome the long-time laser frequency drift, we often must include a second integrator for the laser frequency corrections in the Monte Carlo model. This is implemented by replacing Eq. (9) above with

c_{1,k+1} = c_{1,k} − [G_1 φ_{1,k}^est + G′_1 Σ_{i≤k} φ_{1,i}^est]/(2πT_k),   (10)

where G′_1 is the gain of the second integrator. A second integrator is not needed for the frequency ratio feedback for the noise we have considered.
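A minimal sketch of one simulated measurement cycle, implementing the transition probability of Eq. (6), binomial projection noise, and the restricted inversion of Eq. (7), is shown below. The parameter values in the usage line are assumptions chosen only to exercise the function.

```python
import numpy as np

rng = np.random.default_rng(0)

def ramsey_measurement(phi: float, N: int, T: float, tau: float) -> float:
    """One simulated Ramsey cycle: Eq. (6) transition probability with
    contrast decay exp(-T/tau), binomial projection noise on N atoms, and
    the Eq. (7) inversion restricted to [-pi/2, pi/2]."""
    p = 0.5 * (1.0 + np.exp(-T / tau) * np.sin(phi))
    m = rng.binomial(N, p)          # number of atoms found in spin up
    p_est = m / N
    # Clip guards against projection noise pushing the argument past +/-1.
    arg = np.clip((2.0 * p_est - 1.0) * np.exp(T / tau), -1.0, 1.0)
    return float(np.arcsin(arg))    # phi_est = R^-1(p_est)

# Illustrative check with assumed parameters: small phase, 100 atoms.
print(ramsey_measurement(phi=0.2, N=100, T=1.0, tau=20.6))
```

Wrapping this routine in a loop over cycles, with β_k and c_{1,k} updated per Eqs. (8)-(10) and the noise traces from the previous sketch, reproduces the structure of the Monte Carlo model.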
IV. MEASUREMENTS WITH ENTANGLED STATES OF ATOMS

The simulation results in Fig. 2 extend to frequency ratios well beyond those available with the current generation of optical clocks. However, a clock based on N atoms prepared in a maximally entangled Greenberger-Horne-Zeilinger (GHZ) state operates effectively at a frequency N times higher. These states have been produced in the laboratory for small numbers of trapped ions up to N = 14 [38]. Previously, consideration of experimental noise sources including local oscillator noise has made the application of these quantum states for spectroscopy unrealistic for small numbers of atoms [9]. Other quantum states and spectroscopy protocols have been proposed that retain some quantum advantage even in the presence of noise [34,39,40], but none reaches the Heisenberg limit with realistic local oscillator noise. Here, we show that frequency ratio measurements between two clocks with phase-locked local oscillators can take full advantage of the quantum enhancement at the Heisenberg limit. Consider the case where we replace the single-atom clock of Fig. 2 with a clock based on N_2 ions prepared in a GHZ state to provide Heisenberg-limited measurement variance [8],

σ_y(τ) = 1/(2πνN√(Tτ)).   (11)

For independent operation of a single clock, LO noise limits the probe time to T = η/(Nνσ_L), returning the measurement to the same projection noise limit as that for N unentangled atoms [Eq. (1)] [9]. This has previously been confirmed numerically with a realistic laser noise spectrum [34]. Note, however, that in our clock comparison protocol, the duration of the probe is limited not by laser noise but by the projection noise of clock 1, which may be orders of magnitude smaller. Since a clock operating with atoms in a GHZ state evolves at an effective frequency Nν, the performance of the comparison using our protocol can be determined from Fig. 2 by substituting ν_2 → N_2ν_2.

[FIG. 3. Simulated stability of a comparison between a ytterbium optical lattice clock and an aluminum ion clock operating with five ions in a GHZ state. The fractional clock 1 frequency stability (blue points) and the fractional ratio measurement stability (black points) are shown, along with the common-mode, unstabilized laser frequency noise (red points) and the differential laser frequency noise (green points). Clock 1 reaches the Dick effect stability limit (blue dashed line), while the ratio stability exceeds that, reaching the calculated projection noise limit for the aluminum ion clock (black dashed line).]

We illustrate this with a detailed Monte Carlo simulation of a frequency comparison between a ytterbium optical lattice clock and an aluminum ion clock with five ions in a GHZ state. For this simulation, in addition to the laser phase noise, we include differential phase noise due to path-length fluctuations between the two clocks, and we take into account the finite lifetime of the Al+ 3P0 state, dead time in both clocks, and the delay between the final π/2 pulses in the near-synchronous Ramsey experiments. We assume that the Al+ ions have been prepared perfectly in a GHZ state at the beginning of the Ramsey interval and that, after the second Ramsey pulse, the parity of the atomic state is measured with unit fidelity [8]. In this case, during the Ramsey interval the atom state evolves as

|ψ(t)⟩ = (|↓⟩^{⊗N} + e^{iNφ(t)}|↑⟩^{⊗N})/√2.   (12)

The increase by a factor of N_2 in phase sensitivity must be reflected in the gain of the frequency ratio feedback, such that Eq. (8) becomes
β_{k+1} = β_k − G_β (φ_{2,k}^est − β_kφ_{1,k}^est)/(2πν_1T_kN_2).   (13)

Since the GHZ state is also more sensitive to spontaneous decay, the lifetime of these states is modeled using τ_j → τ_j/N_j. The simulated ratio comparison stability shown in Fig. 3 is found to be consistent with the Heisenberg limit for the Al+ clock [Eq. (11)], with a small offset due to the finite lifetime of the 3P0 state (τ = 20.6 s [41]). This indicates that the ratio stability is reaching the limit imposed by the atomic coherence of Al+. The averaging period of 35 min to reach a statistical measurement uncertainty of 1 × 10⁻¹⁸ is reduced by a factor of 25 from an asynchronous clock comparison with otherwise identical laser noise parameters.

V. MAXIMUM-LIKELIHOOD PROTOCOL

So far, our discussion has focused on the use of one clock with low projection noise to reduce the projection noise of a second clock. However, in a comparison between two clocks with low projection noise (e.g., two optical lattice clocks), it is possible to combine information from the two simultaneous phase measurements to extend the probe time of both. Again, we consider a simultaneous Ramsey experiment on two clocks operating at different frequencies, but in this case the Ramsey probe duration T extends beyond the limits imposed by LO noise for both clocks, and the phase estimate must be modified to accommodate clock phases outside the range [−π/2, π/2]. For a given set of measurement outcomes {p_1, p_2} of the two clocks, there are multiple candidate sets {φ_{1,n}^est, φ_{2,m}^est} of the two clock phases indexed by n and m, where φ_{j,m}^est = mπ + (−1)^m R⁻¹(p_j) − θ_j. Here, we have dropped the measurement index k for convenience. Note that the additional measurement phase for the second Ramsey π/2 pulse on clock 1 is always θ_1 = 0 whereas, for clock 2, it is set to a random value θ_2 for each probe in order to help avoid ambiguous phase inversion. We calculate the statistical weight W_{n,m} of each possible phase pair via a maximum-likelihood analysis such that

W_{n,m} = 𝒩 ∫ dφ_1 P_{1,n}(φ_1) P_{2,m}(φ_1) P_L(φ_1),   (14)

where P_L(φ_1) is the calibrated prior probability distribution for the laser phase noise, a Gaussian with standard deviation φ_L, and P_{j,n}(φ_1) is the probability distribution centered at φ_{j,n}^est based on the measurement result for clock j, with the clock 2 distribution evaluated at the scaled phase (ν_2/ν_1)φ_1. Here, 𝒩 is a constant independent of φ_{1,n}^est and φ_{2,m}^est that can be determined by the normalization equality Σ_{n,m} W_{n,m} = 1. These probability distributions are illustrated in Fig. 4(a). The integral can be performed analytically, giving a closed-form Gaussian expression for W_{n,m} up to a normalization constant M. Here, for the purposes of determining the proper feedback to both the laser and the frequency ratio, the atomic projection noise has been modeled as a Gaussian distribution in phase of variance (Δφ_j)² = 1/N_j for both clock 1 and clock 2. This model is supported by the simulation, which uses these distributions to calculate frequency corrections in the presence of realistic noise from atomic state projection and laser phase deviations. The feedback corrections to the clock laser and the ratio are applied for all possible phase inversion outcomes, weighted by their normalized relative probabilities; that is, the updates of Eqs. (9) and (8) are evaluated for each candidate pair (φ_{1,n}^est, φ_{2,m}^est) and summed with weights W_{n,m}. The summations should in principle run over all integers, but in practice they can be truncated because W_{n,m} is negligibly small for large enough |n| or |m|. The ranges n ∈ {−⌈6φ_L/π − 1/2⌉, …, ⌈6φ_L/π − 1/2⌉} and m ∈ {−⌈(6φ_L/π)(ν_2/ν_1) − 1/2⌉, …, ⌈(6φ_L/π)(ν_2/ν_1) − 1/2⌉}, where ⌈·⌉ denotes the ceiling function, cover the actual atomic phases with 6σ confidence and are used in the Monte Carlo model.
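A minimal sketch of the weight computation of Eq. (14) is given below, evaluating the integral numerically rather than in closed form. It assumes Gaussian projection noise of variance 1/N_j and, as our reading of the clock-2 likelihood, evaluates that distribution at the scaled phase (ν_2/ν_1)φ_1; all numerical values in the usage line are placeholders.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def weights(phi1_cands, phi2_cands, N1, N2, ratio, phi_L):
    """Normalized weights W_{n,m} per Eq. (14), by numerical integration
    over phi_1. Assumes Gaussian likelihoods of std 1/sqrt(N_j) and a
    clock-2 likelihood centered on (nu2/nu1)*phi_1 (our assumption)."""
    s1, s2 = 1.0 / np.sqrt(N1), 1.0 / np.sqrt(N2)
    W = np.zeros((len(phi1_cands), len(phi2_cands)))
    for n, c1 in enumerate(phi1_cands):
        for m, c2 in enumerate(phi2_cands):
            integrand = lambda x: (norm.pdf(x, c1, s1)
                                   * norm.pdf(ratio * x, c2, s2)
                                   * norm.pdf(x, 0.0, phi_L))
            W[n, m], _ = quad(integrand, -6.0 * phi_L, 6.0 * phi_L)
    return W / W.sum()

# Two inversion branches per clock (m*pi ambiguity), assumed example values.
W = weights([0.3, 0.3 + np.pi], [0.5, 0.5 + np.pi],
            N1=1000, N2=1000, ratio=1.2, phi_L=1.0)
print(W)
```

Because all three factors are Gaussian, the product integral has the closed form alluded to above; the numerical version is shown only because it makes the structure of Eq. (14) explicit.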
The probe duration is limited by phase estimation errors caused by the projection noise of the two clocks. Figure 4 shows the asymptotic fractional ratio measurement stability for a comparison of a strontium optical lattice clock with a ytterbium optical lattice clock for different numbers of atoms. With N_1 = N_2 = 10 atoms, the projection noise phase uncertainty is too large to allow unique inversion for phases outside the range [−π/2, π/2], and the probe duration is limited to near the laser coherence limit at 1 s. As the number of atoms is increased, degenerate inversion outcomes become less likely, and the probe time can be extended beyond the laser coherence time, up to 30 s, for example, for N_1 = N_2 = 10⁴. The same clocks subject to identical laser noise but run asynchronously with a standard feedback routine give an asymptotic (τ = 1 s) ratio stability of 4 × 10⁻¹⁷, with the probe times limited to 1 and 1.2 s for clocks 1 and 2, respectively. Thus, in this case, our protocol provides a reduction in the averaging time by a factor of 2000, with a factor of 100 improvement coming from the elimination of Dick effect noise and the remainder due to extending the Ramsey probe time.

VI. CONCLUSION

We have described protocols for frequency ratio measurements of optical clocks that use phase-locked LOs to reduce the projection noise by extending the probe time beyond the laser coherence time and eliminating noise due to the Dick effect. We emphasize here that most of these improvements can be realized with laser systems at demonstrated levels of performance, which addresses an immediate issue for the present generation of optical clocks. For example, the suppression of differential laser phase noise via active stabilization of optical paths (e.g., fiber noise cancellation) as well as laser stabilization via femtosecond combs is a standard technique in many labs. One experimental challenge in implementing such a measurement is to integrate path-length stabilization seamlessly across the entire path from one atomic ensemble to the other. In the case of Fig. 3, comparing an aluminum ion clock to a Yb lattice clock, relative phase stability must be maintained between the two experiments, spanning several wavelengths that connect the 578-nm Yb clock laser at the atomic ensemble to the 267-nm Al+ clock laser where it probes the trapped ions. While all components of this phase-stabilized frequency chain have been demonstrated, the full implementation will require careful consideration of the sources of differential noise in the system. Similarly, while it remains challenging to produce GHZ states of trapped ions, a number of techniques have been demonstrated, with fidelities above 90% for up to 6 ions in a linear chain [38]. In Fig. 4 we have ignored differential laser phase noise to explore the limits of phase inversion using a maximum-likelihood analysis. In order to realize a comparison with an Allan deviation below 1 × 10⁻¹⁷ at an averaging time of 1 s, differential noise in the femtosecond comb frequency transfer as well as path-length noise would have to be reduced below this level.
On the other hand, as we envision moving optical clocks out of the laboratory for applications such as relativistic geodesy, the ideas presented here significantly relax the requirements on laser coherence, enabling measurement stability at the current state of the art with laser stability orders of magnitude worse, which might be attained in a robust package.

[Table I (Appendix): simulation parameters for the lattice-ion protocol (Fig. 3) and the maximum-likelihood protocol (Fig. 4).]

This work is a contribution of the U.S. government, not subject to U.S. copyright.
Genomic Makeup of the Marine Flavobacterium Nonlabens (Donghaeana) dokdonensis and Identification of a Novel Class of Rhodopsins Rhodopsin-containing marine microbes such as those in the class Flavobacteriia play a pivotal role in the biogeochemical cycle of the euphotic zone (Fuhrman JA, Schwalbach MS, Stingl U. 2008. Proteorhodopsins: an array of physiological roles? Nat Rev Microbiol. 6:488-494). Deciphering the genome information of flavobacteria and assessing the diversity and ecological impact of microbial rhodopsins are important for understanding and preserving global ecosystems. The genome sequence of the orange-pigmented marine flavobacterium Nonlabens dokdonensis (basonym: Donghaeana dokdonensis) DSW-6 was determined. As a marine photoheterotroph, DSW-6 carries in its genome physiological features that allow survival in oligotrophic environments. The sequence analysis also uncovered a gene encoding an unexpected type of microbial rhodopsin containing a unique motif, in addition to a proteorhodopsin gene and a number of photolyase or cryptochrome genes. Homologs of the novel rhodopsin gene were found in other flavobacteria, alphaproteobacteria, a species of Cytophagia, a deinococcus, and even a eukaryotic diatom. They all contain the characteristic NQ motif and form a phylogenetically distinct group. Expression analysis of this rhodopsin gene in DSW-6 indicated that it is induced at high NaCl concentrations, as well as in the presence of light and the absence of nutrients. Genomic and metagenomic surveys demonstrate the diversity of the NQ rhodopsins in nature and the prevalent occurrence of the encoding genes among microbial communities inhabiting hypersaline niches, suggesting its involvement in sodium metabolism and a sodium-adapted lifestyle. Introduction Marine ecosystems, in which half of global primary production occurs, are home to oligotrophs that are responsible for the biogeochemical cycle and add an important axis to the Earth's energy balance (Falkowski et al. 1998; Whitman et al. 1998; Azam and Malfatti 2007). Flavobacteria, which belong to the phylum Bacteroidetes, previously called the Cytophaga-Flavobacterium-Bacteroides (CFB) group, are abundant in these ecosystems, and a number of marine bacterial lineages harbor proteorhodopsins (PRs) (Giovannoni et al. 2005; Rusch et al. 2007). Among metagenome fragments recruited from the Global Ocean Sampling (GOS) expedition, two assembled flavobacterial genomes harboring the PR gene turned out to represent dominant taxa in the Northwest Atlantic (Rusch et al. 2007; Woyke et al. 2009). Solar energy is captured and converted into chemical energy by phototrophs that depend either on the chlorophyll-harboring photosynthetic reaction center or on the photoactive retinal-binding rhodopsin. Contrary to the multicomponent photosynthetic reaction centers, which are restricted to six bacterial phyla, single-molecule microbial rhodopsins show wide taxonomic distribution, possibly through horizontal gene transfer between domains and phyla (Bryant and Frigaard 2006; Sharma et al. 2006; Bryant et al. 2007). Despite the great diversity of microbial rhodopsins, these proteins share structural features such as seven membrane-spanning helices (Fuhrman et al. 2008). The structure and function of archaeal bacteriorhodopsins (BRs) with the retinal chromophore have been studied most intensively to date. BRs move protons across the membrane out of the cell using light energy to generate an electrochemical proton gradient, which in turn can be used for ATP production (Lanyi 2004).
Metagenomic approaches enabled the discovery of PRs, the first rhodopsins of bacterial origin, from the uncultured marine gammaproteobacterial SAR86 group (Beja et al. 2000, 2001). PRs share high sequence similarity with BRs, and light-driven chemiosmotic proton translocation was observed after heterologous expression in Escherichia coli (Beja et al. 2000, 2001). Recently, in a PR-containing marine flavobacterial suspension, light-driven proton transport activity sufficient for ATP generation was demonstrated (Yoshizawa et al. 2012). Among other well-known rhodopsins that generate proton-motive force, xanthorhodopsins (XRs), first discovered in Salinibacter ruber, are unusual in that they require two chromophores, the carotenoids salinixanthin and retinal, to broaden the spectral range for light harvesting (Balashov et al. 2005). Actinorhodopsins (ActRs), which were recently found in actinobacteria from a hypersaline lagoon, an estuary, and a freshwater lake, are abundant in microbial communities in freshwater ecosystems (Sharma et al. 2009). Halorhodopsins (HRs), which are light-driven inward chloride pumps, exist in halophiles. These rhodopsins may participate in the regulation of ionic content and the osmotic state (Mongodin et al. 2005). Although most rhodopsins function as ion transporters, sensory rhodopsins (SRs) mediate phototaxis or signal transduction, like photoreceptors do (Fuhrman et al. 2008). The amino acid residues of the proton-pumping rhodopsins necessary for retinal binding are highly conserved and differ from those of HRs, and SRs even lack the functional residues in the retinal-binding pocket. In this study, we determined and analyzed the complete genome sequence of an orange-pigmented marine flavobacterium, Nonlabens (Donghaeana) dokdonensis DSW-6 (Yoon et al. 2006; Yi and Chun 2012). The genome information provides a glimpse of the survival strategy of DSW-6 as a photoheterotroph in the oligotrophic ocean. Importantly, in addition to a typical PR, we found a new type of rhodopsin whose retinal-binding sequences are distinct from those of well-studied rhodopsins. To better understand the characteristics of this new type of rhodopsin, its gene expression level in DSW-6 was monitored under various light intensities, nutrient concentrations, and NaCl concentrations. Similarity searches against completely sequenced genomes and expressed sequence tags were performed, revealing a number of homologs present in the classes Flavobacteriia, Alphaproteobacteria, Cytophagia, and Deinococci, and even in a eukaryotic diatom. Finally, the frequency of this rhodopsin family in diverse aquatic ecosystems was investigated by searching through public databases of environmental sequence data. Strain and Culture Conditions Donghaeana dokdonensis DSW-6, recently reclassified as Nonlabens dokdonensis comb. nov. (Yi and Chun 2012), was isolated from surface seawater collected between the two main islands of Dokdo, Republic of Korea (Yoon et al. 2006). This nonmotile strain grows under strictly aerobic conditions and exhibits optimal growth in the presence of 2% NaCl at 25 °C (Yoon et al. 2006). Cells were grown on Marine Agar 2216 (Difco, USA) or in Artificial Sea Water (ASW) prepared from sea salts (Sigma-Aldrich, USA) supplemented with 2.5% w/v peptone and 0.5% w/v yeast extract. The strain produces orange-colored carotenoid pigments.
Genome Sequencing and Annotation A hybrid approach of Roche/454 pyrosequencing and Sanger sequencing followed by manual gap filling was applied to decipher the N. dokdonensis DSW-6 genome. Shotgun pyrosequence reads of approximately 30-fold genome coverage were generated on a GS FLX (NICEM, Korea) and assembled into 98 contigs using gsAssembler. A total of 2,035 paired-end Sanger sequence reads (GenoTech Co., Korea) from a 35-kb genomic library were incorporated to yield two scaffolds. Genomic regions containing nonribosomal peptide synthetase genes or IS elements could not be properly assembled because of their highly repeated sequence patterns. To disentangle these overcollapsed contigs, additional Sanger sequences were obtained by random shotgun sequencing of 35-kb fosmid clones spanning each gap. All the remaining small gaps were closed by sequencing PCR-amplified genomic fragments. The PHRED/PHRAP package (Ewing and Green 1998) was used for Sanger read base calling and partial mini-assembly, and all sequence editing procedures were conducted using CONSED (Gordon et al. 1998). The final assembly led to a single chromosome without plasmids. The sequence was validated and errors were rectified by comparing the final assembly with independent sequence data, to further increase the accuracy of the assembly and to avoid sequence errors in homopolymeric nucleotides. Trimmed high-quality 102-bp shotgun reads at 806-fold genome coverage were produced using the Illumina/Solexa GA II system and mapped to the assembled sequence with CLC Genomics Workbench (CLC bio, Inc., Denmark). Gene prediction on the finished DSW-6 genome sequence was conducted using Glimmer 3.0 (Delcher et al. 2007). Functional assignment of the predicted genes was achieved by searching for homologs in public protein databases, such as UniRef100, Swiss-Prot, the GenBank nonredundant protein database, KEGG, and SMART, using the Basic Local Alignment Search Tool (BLAST) program (Altschul et al. 1997). The outputs were automatically parsed using AutoFACT (Koski et al. 2005), and then all the annotations were manually curated. tRNAscan-SE (Lowe and Eddy 1997) was applied to search for tRNA genes in the genome, and rRNA genes were identified using BLAST. Metabolic pathways were examined using the KEGG database (Aoki-Kinoshita and Kanehisa 2007) and BioCyc, which was generated by the PathoLogic program of the Pathway Tools software (Karp et al. 2010). The sequence and annotation have been deposited in GenBank under the accession number CP001397. The genome information is also available from the Genome Encyclopedia of Microbes (GEM; http://www.gem.re.kr, last accessed January 9, 2013). Phylogenetic Analysis of Microbial Rhodopsins To infer the phylogenetic groups of microbial rhodopsins, exemplary representatives of each class of ActRs, BRs, HRs, SRs, PRs, and XRs were retrieved from the UniProt or GenBank databases. The two rhodopsins of N. dokdonensis DSW-6 and nine NQ-type rhodopsins identified from complete genomes and expressed sequence tags, including that of the diatom Chaetoceros neogracile KOPRI AnM0002, were included in the analysis. A multiple sequence alignment was obtained using the MUSCLE algorithm (Edgar 2004), and ambiguously aligned regions were adjusted with the Gblocks program (Talavera and Castresana 2007). To construct phylogenetic trees, we used the MEGA5 package (Tamura et al. 2011) for the maximum parsimony (MP), neighbor-joining (NJ), and maximum likelihood (ML) methods, and MrBayes v3.1.2 (Zhang et al. 2012) for Bayesian inference.
Phylogenetic trees were constructed by the NJ method based on the Jukes-Cantor distance model, followed by a 1,000-replicate bootstrap analysis for statistical support. For the application of the ML and Bayesian methods, we ran ProtTest v2.4 (Abascal et al. 2005) to determine the appropriate model of amino acid replacement. Each Bayesian computation consisted of 500,000-generation runs of four chains. Finally, the Bayesian posterior probabilities from the set of nonparametric bootstrap replicate trees were supported using the SumTrees program in the DendroPy package (Sukumaran and Holder 2010). The initial 25% of trees in each file was excluded from the analysis as burn-in. Trees obtained with MrBayes were visualized using the Dendroscope 3 program (Huson and Scornavacca 2012). NaCl-Dependent Expression of the NQ Rhodopsin Gene For the study of NQ rhodopsin gene expression, DSW-6 was grown in 30 ml of ASW enriched with an additional carbon source. ASW prepared with sea salt (Sigma-Aldrich, USA) was passed through a 0.2-μm-pore-size filter, enriched by adding 0.15 g of peptone (Bacto Peptone, Difco) and 0.03 g of yeast extract (Bacto Yeast Extract, Difco), and then autoclaved. The final dissolved organic carbon concentration was approximately 2,200 ppm, as measured using a TOC-V total organic carbon analyzer (Shimadzu, Japan). Triplicate cultures were incubated at 25 °C under continuous light (135.05 μmol photons m⁻² s⁻¹). When the culture's optical density at 600 nm reached approximately 0.4, 3 ml of the culture was centrifuged, and the pellet was stored immediately in 500 μl of RNAlater (Ambion, USA) to yield the untreated control (UC) samples. For the remaining cultures, 4 M NaCl stock or the carbon-enriched ASW was added to each cell culture to yield NaCl concentrations of 2.5% (0.43 M), 5.0% (0.86 M), 7.5% (1.28 M), 10% (1.71 M), and 12.5% (2.14 M). The culture volume of each sample was adjusted to 50 ml, giving a dilution of approximately 2-fold. To make the 4 M NaCl stock, NaCl was dissolved in ASW enriched with peptone and yeast extract, to maintain the nutrient concentration of the culture medium. When the OD600 of the 2.5% NaCl culture again reached approximately 0.4, induction (ID) samples were collected in 500 μl of RNAlater by the same method. Samples of 3 ml were collected for the 2.5% NaCl ID cultures, 5 ml for the 5.0% NaCl ID cultures, and 6 ml for the 7.5%, 10%, and 12.5% NaCl ID cultures. All of the RNAlater-treated samples were stored at 4 °C until RNA extraction. Total RNA was extracted using the RNeasy kit (Qiagen, USA). Remaining chromosomal DNA was eliminated using a Turbo DNA-free Kit (Ambion, USA). No amplification was observed after 40 cycles of PCR when using the RNA samples as templates without the reverse transcription step, confirming complete DNA removal. cDNAs were obtained by reverse transcription (M-MLV cDNA Synthesis Kit, Enzynomics, Korea). Real-time PCRs were carried out on a CFX Connect Real-Time PCR Detection System (Bio-Rad, USA) with the iQ SYBR Green Supermix (Bio-Rad).
Each cDNA sample was amplified with specific primers (qNR_F 5'-GAG AAT TAT GTA GGT GCT ACA GAC G, qNR_R 5'-GTG CCA AAT TAC CAA GTA ATA CAC CA, q16S_F 5'-AGG ACT TAA CCT GAC ACC TCA C, q16S_R 5'-GGG TTG TAA ACT ACT TTT GTA CAG, qPR_F 5'-CTA AAA TGG CCA CAG ATG ATT ATG TAG, and qPR_R 5'-TTG CAT CAC CAA TGT TGT AAA CTA CG) and quantified in triplicate. The PCR conditions were an initial denaturation step at 95 °C for 5 min, followed by 40 cycles of amplification at 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 1 min. Finally, an additional step to establish the melting curve, in which the temperature was decreased from 95 to 65 °C (0.05 °C s−1), was performed. Threshold cycle values (Ct) for each measurement were determined. The relative quantification of gene expression was calculated using the comparative threshold cycle (2−ΔΔCT) method, in which the amount of the RNA of interest is adjusted to an internal reference RNA (Livak and Schmittgen 2001). The 16S rRNA gene was used as the internal control in this study. The following equations were used: ΔΔCT = (CT,target − CT,16S)ID − (CT,target − CT,16S)UC, and relative expression = 2−ΔΔCT. Metagenomic Data Analysis of the NQ Rhodopsins To identify the new type of rhodopsin genes in metagenomic data sets and to assess their abundance in diverse aquatic environments, approximately 58 Gb of unassembled reads from metagenome sequencing projects were downloaded from the CAMERA (Sun et al. 2011) and IMG/M (Markowitz et al. 2012) databases in October 2011. BLASTX searches of individual reads (identity ≥ 30%, E value ≤ 1e−5) were performed against an in-house database consisting of the prokaryotic rhodopsins selected for phylogenetic reconstruction, retrieved from the GenBank and UniProt databases. The identity of the NQ-type rhodopsins among the recruited reads was ensured by carrying out BLASTP searches of the translation products against the GenBank nonredundant protein database and selecting those that matched the NQ-type rhodopsins in the database as one of the top three best hits. The metagenomic reads confirmed as NQ-type rhodopsins were aligned with the 10 full-length NQ-type rhodopsins (see earlier), and their phylogenetic positions were assigned by reconstructing trees using the NJ method in the MEGA5 package (Tamura et al. 2011) with 1,000 iterations. If a read was not localized within the NQ-rhodopsin group or showed low phylogenetic support (bootstrap value < 70), the read was discarded. Relationships among the 10 NQ-type rhodopsins and the 55 metagenomic reads were further examined through a clustering analysis, because many of the rhodopsin sequences from metagenome data sets are small fragments that often do not overlap with each other. We made the simple assumption that the identity values of a test sequence against the 10 reference sequences will be similar to those of its close relatives. Gap regions among the 10 references were determined by multiple sequence alignment based on the MUSCLE algorithm (Edgar 2004), and amino acid positions corresponding to the gaps were excluded from further analysis. Identity values between the test sequences and the references were calculated using the MEGA5 program with the pairwise deletion option. To estimate the similarities between the test sequences, the identity values were used as input characters for the UPGMA algorithm (Murtagh 1984) with root mean square deviation (Huang et al. 2005).
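The identity-profile clustering step just described can be sketched in a few lines of Python. This is only an illustration, not the authors' original pipeline: the identity values would in practice come from the MEGA5 pairwise computations, and the profiles, sequence labels, and clustering threshold below are invented placeholders.

```python
# Sketch: clustering fragmentary metagenomic reads by their identity
# profiles against the 10 full-length NQ-rhodopsin references, using
# RMSD between profiles and UPGMA (average-linkage) clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# rows = sequences (references or reads), columns = identity (%) against
# each of the 10 full-length references; values here are placeholders
identity_profiles = np.array([
    [100.0, 62.1, 58.4, 55.0, 54.2, 51.9, 50.3, 49.8, 47.5, 45.1],  # reference 1
    [ 61.7, 99.0, 60.2, 56.3, 53.8, 52.4, 49.9, 48.6, 46.2, 44.8],  # read A
    [ 48.2, 47.9, 49.1, 50.5, 52.0, 53.3, 55.8, 57.1, 96.4, 60.2],  # read B
])

# Root-mean-square deviation between identity profiles, the similarity
# measure used to compare non-overlapping fragments
rmsd = pdist(identity_profiles, metric="euclidean") / np.sqrt(identity_profiles.shape[1])

# UPGMA corresponds to average-linkage hierarchical clustering
tree = linkage(rmsd, method="average")
groups = fcluster(tree, t=10.0, criterion="distance")  # threshold is a placeholder
print(groups)
```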
The prevalence of prokaryotic cells that possess the NQ-type rhodopsins in the metagenomic data sets was estimated from the ratio of the NQ-type rhodopsin reads to RecA, RpoB, and EF-Tu reads. These three proteins are commonly encoded by a single-copy gene in most prokaryotic genomes. BLASTX searches of the metagenomic reads from CAMERA and IMG/M against RecA, RpoB, and EF-Tu sequences retrieved from the GenBank and UniProt databases, followed by BLASTX of the resulting candidates against the COG database (Tatusov et al. 2003), identified the reads homologous to the three proteins. When estimating the number of each protein, the length of the protein should be taken into account; the longer a protein is, the higher the chance of that protein being sequenced from the metagenome pool. Thus, the number of reads assigned to each protein was divided by the average length of the protein. The normalized proportion (r_a) of prokaryotes containing the new type of rhodopsins in a metagenomic data set was calculated as follows: r_a = (n_NQ/l_NQ) / [(n_RecA/l_RecA + n_RpoB/l_RpoB + n_EF-Tu/l_EF-Tu)/3], where n is the number of reads assigned to a protein and l is its average length. Genome Properties for Marine Heterotrophy Isolated from a seawater sample taken at Dokdo in the East Sea of Korea, N. dokdonensis (basonym: Donghaeana dokdonensis) DSW-6 is a member of the class Flavobacteriia (Yoon et al. 2006). We determined the complete genome sequence of this marine flavobacterium, which harbors a single 3.9-Mb chromosome (supplementary fig. S1 and table S1, Supplementary Material online). Phylogenetic analysis of 16S rRNA genes suggested Gillisia, Gramella, Psychroflexus, and Zunongwangia as the sister genera of Nonlabens, whereas a tree based on broadly conserved proteins of the flavobacteria whose genomes have been completely sequenced indicated that Croceibacter, Gramella, Krokinobacter, Lacinutrix, and Zunongwangia are closely related to Nonlabens (supplementary fig. S2, Supplementary Material online). The genome sequence allowed us to reconstruct the metabolic network of the bacterium in silico. The primary metabolic network, including the amino acid biosynthetic pathways and transport systems, is summarized in figure 1 and supplementary table S2, Supplementary Material online. The genes encoding the enzymes for glycolysis and the pentose phosphate pathway were all present. These pathways provide the precursor metabolites for nucleotide and fatty acid biosynthesis. All the enzymes of the tricarboxylic acid (TCA) cycle are also well conserved in this genome. Most amino acids are synthesized by DSW-6 itself or are derived from the digestion of proteins by secreted peptidases and then transported across the membrane, where they serve as the main source of nitrogen. The urea cycle does not exist, but a pathway for the conversion of nitrite to ammonia to glutamine is conserved. Enzymes in the synthetic pathway for the three branched-chain amino acids leucine, isoleucine, and valine are missing, but genes encoding the enzymes responsible for the degradation of these amino acids exist in the genome. Physiology for Survival in the Marine Environment As a marine bacterium that inhabits surface seawater, DSW-6 has many attributes that may endow it with metabolic advantages allowing survival under oligotrophic conditions. Enzymes compensating for carbon limitation in central metabolism are well conserved in DSW-6. Anaplerotic enzymes replenish the pools of metabolic intermediates in the TCA cycle, which are used as precursors for the biosynthetic pathways, and maintain the oxidative carbon flux.
Phosphoenolpyruvate (PEP) carboxylase (DDD_3148) and pyruvate carboxylase (DDD_1224) each create oxaloacetate from PEP or pyruvate by adding bicarbonate (Owen et al. 2002). Bicarbonate is imported into the cell by transporters encoded by a SulP-type Na+-dependent bicarbonate transporter gene (DDD_2127) or the Na+-dependent bicarbonate secondary transporter SbtA (DDD_0780). Carbonic anhydrase interconverts CO2 and bicarbonate to sustain a sufficient substrate level. The CFB group bacteria are known to be adapted to use high-molecular-weight organic matter, primarily polysaccharides and proteins (Kirchman 2002). In particular, during a mesocosm study of phytoplankton blooms, dominating flavobacteria were shown to play leading roles in the degradation of organic matter in micronutrient-rich environments (Riemann et al. 2000). Genome analysis of several marine flavobacteria revealed that these organisms are adapted to use polymeric organic matter that is only occasionally available (Bauer et al. 2006; Gonzalez et al. 2008, 2011). Similar to the genomes of these strains, the DSW-6 genome encodes numerous degradative enzymes, including 14 glycosyl transferases, 20 glycosyl hydrolases, and 25 predicted peptidases. This genome also encodes 24 proteins with cell-surface-adhesion domains that may function in binding to organic particles for efficient breakdown, even though the bacterium is not motile. These proteins benefit the survival of DSW-6 in nutrient-poor oceans where blooms occur only occasionally. DSW-6 shares features with other ocean-inhabiting bacteria that establish a Na+ gradient instead of an H+ gradient for generating the motive force for the first step of their respiratory chain (Unemoto and Hayashi 1993). DSW-6 possesses Na+-translocating NADH:quinone oxidoreductase (NQR) subunits as parts of its NADH dehydrogenase. In addition to the NADH dehydrogenase subunits, genes encoding cytochrome c oxidase, cytochrome c, and ATP synthase are present, but genes for the bc1 complex do not exist (supplementary table S3, Supplementary Material online). Instead, the alternative complex III (Refojo et al. 2010) is present to feed electrons to cytochrome c. DSW-6 has numerous genes to cope with internal or environmental stresses, such as oxidative stress or osmotic shock (supplementary table S4, Supplementary Material online). Many nonphotosynthetic marine bacteria use carotenoids to deal with the damage caused by solar radiation (Hader and Sinha 2005) or to adapt to cold environments by modulating membrane fluidity (Jagannadham et al. 2000). DSW-6 is orange colored because of carotenoid pigments (Yoon et al. 2006), which are synthesized by clustered crt genes. The repertoire and organization of the carotenoid-biosynthetic genes are conserved among DSW-6, G. limnaea R-8282, and Krokinobacter sp. 4H-3-7-5 (supplementary fig. S3, Supplementary Material online). DSW-6 may express numerous photolyases/cryptochromes related to the repair of UV-induced DNA damage, as well as PAS/BLUF/PAC light-sensing domains located in membrane-bound sensor molecules (supplementary fig. S4 and table S5, Supplementary Material online). Carotenoids and these photoproteins are also related to the rhodopsin-based photosystem in the marine euphotic zone. β-Carotene, one of the carotenoid pigments, is the precursor of the retinal that binds to rhodopsin, and phylogenetic analysis illustrates the coexistence of photolyases/cryptochromes and PRs among marine flavobacteria (Gonzalez et al. 2008).
Identification of a Unique Type of Microbial Rhodopsin When mining the DSW-6 genome, we uncovered two rhodopsin genes, as well as the blh gene, whose product is responsible for the oxidative cleavage of β-carotene into two retinal molecules. Retinal binding is essential for rhodopsin function. E. coli BL21(DE3) cells expressing either of these rhodopsins, as well as the purified proteins, were pinkish-red in color owing to the binding of rhodopsin to all-trans-retinal, which was provided separately (data not shown). Primary sequence analysis of the translation products suggested that one is a typical PR. Its sequence is highly similar to those of flavobacterial PRs. The functional residues in the ion transfer pathway are all conserved, as in the numerous other proton-pumping rhodopsins. Retinal is predicted to bind covalently to the ε-amino group of the Lys-233 residue in DSW-6 PR (Lys-216 in BR), yielding a protonated retinylidene Schiff base. When stimulated by light, Asp-87 (Asp-85 in BR) becomes the acceptor of the proton from the deprotonated retinal molecule. The proton release group of Arg-84 (Arg-82 in BR) pumps the proton out to the extracellular side of the membrane. Glu-97 (Asp-96 in BR) restores the original protonated form of the retinal molecule. A proton from the cytosol then reprotonates the donor residue, the protonated acceptor residue passes a proton to the release group, and the proton-pumping cycle repeats (Hayashi et al. 2003). Interestingly, the other rhodopsin gene appears to encode a new type of microbial rhodopsin according to the amino acid sequence analysis. Although this rhodopsin has seven transmembrane domains, as do other rhodopsins, its sequence differs considerably. Notably, although the retinal-binding Lys-255 (Lys-216 in BR) and the proton-releasing Arg-109 (Arg-82 in BR) are conserved, the key active-site residues of typical proton-pumping rhodopsins (Asp-85 and Asp-96 in BR) are replaced by Asn-112 and Gln-123 (fig. 2A), suggesting that this protein is functionally unique. We will refer to this Asn-(Xaa)10-Gln sequence as the NQ motif henceforth. The NQ Motif-Containing Rhodopsins Form a Distinct Phylogenetic Class Homologous proteins were detected in broadly different taxa through similarity searches of public databases of microbial genomes. Genes encoding rhodopsins of the new type were present in the recently sequenced genomes of three CFB strains and three marine alphaproteobacterial strains (fig. 2B). Among these strains, two flavobacterial strains, G. limnaea R-8282 and Krokinobacter sp. 4H-3-7-5, and a cytophagal species, H. roseosalivarius, have both the PR gene and the new type of rhodopsin gene. The deinococcal species T. radiovictrix, a radiation-resistant organism recovered from sodium-rich hot spring runoff, carries two of these genes in addition to an HR gene and a rhodopsin gene of an unassigned family. All these rhodopsins contain the NQ motif instead of the DD motif, and the existence of genomes containing only the new type suggests that these organisms apparently do not require blh and idi for its function (table 1). Surprisingly, a homologous sequence was found among the expressed sequence tags of the Antarctic marine planktonic diatom C. neogracile, suggesting that this class of rhodopsins is not restricted to prokaryotes.
Comparative analyses based on the NJ, ML, and Bayesian methods confirm, supported by ≥99% bootstrap values, that the NQ motif-containing rhodopsins form a distinct phylogenetic group. Transcript levels of the NQ motif-containing rhodopsin gene were much lower than those of the PR gene under these conditions. The presence of the unique NQ motif in the newly found class of rhodopsins, and the presence of two rhodopsins containing this motif in a bacterium that thrives in a sodium-rich hot spring, prompted us to analyze the expression of the NQ rhodopsin gene at various concentrations of NaCl to infer the function of the NQ rhodopsins. The artificial seawater (Huang et al. 2005) used for the culture medium contains 2.5% NaCl and represents the isosaline condition. DSW-6 cells were first grown in ASW enriched with additional carbon and nitrogen sources until the early log phase, and then the concentration of NaCl was set to 2.5%, 5.0%, 7.5%, 10%, or 12.5% (see Materials and Methods for details). Cells exposed to 2.5%, 5.0%, or 7.5% NaCl kept growing, whereas those exposed to 10% NaCl did not grow further after 6 h (fig. 3A). At 12.5% NaCl, the OD600 values decreased. After 3 h of exposure to each NaCl concentration, total RNA was extracted from each sample, and the relative expression level of the NQ rhodopsin gene was determined. No difference between the before- and after-treatment time points was observed for the 2.5% NaCl culture, as expected. As the NaCl concentration increased, gene expression was induced. The relative expression level was highest at 10% NaCl (375 ± 31.4). In 12.5% NaCl, the level of gene induction was lower than that in 5.0% NaCl (fig. 3B). In contrast, the expression levels of the PR gene were not affected by the NaCl concentration (data not shown). NQ Rhodopsins Are Frequently Found in Hypersaline Environments To gain information on the diversity and the prevalence of NQ rhodopsins in nature, we searched for this rhodopsin type in metagenomic sequences. A total of 55 reads orthologous to the NQ rhodopsin genes in the microbial genomes described earlier were recruited from metagenomic data sets originating from various aquatic environments, including a hypersaline microbial mat, saltern and freshwater lakes, oceans, and even Antarctica (supplementary table S6, Supplementary Material online). Phylogenetic relationships of the genome-derived NQ rhodopsins inferred through the MP, NJ, ML, and Bayesian approaches indicated that the overall branching patterns are almost identical among the trees (fig. 4A and supplementary fig. S8, Supplementary Material online). However, the grouping of the branches of one NQ rhodopsin in T. radiovictrix DSM 17093 with that in H. roseosalivarius DSM 11622 in the MP, NJ, and Bayesian trees was not supported in the ML tree (supplementary fig. S8A, Supplementary Material online). Positioning of the metagenomic reads on the NJ tree of the full-length NQ rhodopsins indicated that 40 of them (72.7%) can be anchored to the branches comprising the rhodopsins present in N. dokdonensis, G. limnaea, and Krokinobacter sp., and YP_003706581 of T. radiovictrix (fig. 4A). Among the reads included in this subfamily are four out of four reads from a hypersaline microbial mat and 12 of 13 reads from the Great Salt Lake. YP_003706581 has the largest number (15) of closest reads. Clustering of the metagenome reads and the full-length NQ rhodopsins using UPGMA showed that the NQ rhodopsins from the metagenomic data sets can be divided into at least four groups (fig. 4B).
Similar to the results from phylogenetic anchoring, 39 reads clustered with those of the four species (YP_003706581 for T. radiovictrix). Of the 13 reads from the Great Salt Lake, 10 appear to be closely related to YP_003706581. Among the reads from Antarctica, one clusters with the NQ rhodopsin of C. neogracile, whereas six others form another group and appear more distantly related. The occurrence of NQ rhodopsin-carrying prokaryotes in the metagenomic data sets was estimated using the ratio of rhodopsin genes to conserved single-copy genes (supplementary table S7, Supplementary Material online). A hypersaline microbial mat sample from Guerrero Negro, Mexico, containing approximately 90 practical salinity units, had an abundance of NQ rhodopsin genes as high as 17.36%. In addition, the genes were frequent in the samples from the Great Salt Lake (1.48%) and LaBonte Lake (2.75%). Based on the abundance analysis of the GOS data, however, only 0.09% of the prokaryotes inhabiting the sea surface may possess this rhodopsin type. No prokaryotes possessing this gene were found in the high-salinity ponds of the Chula Vista solar salterns, where haloarchaea dominate (Pasic et al. 2009). Discussion The genome analysis of DSW-6 demonstrates that this strain is highly adapted to life in the oligotrophic surface ocean. Similar to other ocean-inhabiting bacteria, DSW-6 is thought to use a Na+ gradient instead of an H+ gradient to generate the motive force for the respiratory chain (Unemoto and Hayashi 1993) and has many Na+-dependent transporter genes. Several stress-response gene products and pigments may protect DSW-6 from harsh sunlight. With regard to light utilization, genes for a PR and for retinal biosynthesis, in addition to those for light-sensing proteins, exist in the DSW-6 genome. In the case of the PR-carrying marine flavobacterium Polaribacter sp. MED152, uptake of bicarbonate increased in the presence of light, suggesting that a PR-mediated proton gradient drives anaplerotic inorganic carbon fixation toward more efficient anabolism (Gonzalez et al. 2008). Similar to MED152, enzymes involved in the anaplerotic pathway are well conserved in DSW-6. Since Beja et al. (2000) discovered PR in an uncultured marine bacterial clade, extensive research, including metagenomic and photochemical approaches, has been conducted to discover the reasons for the great success of this widespread rhodopsin family. Currently, these light-driven proton pumps are considered to power cell growth or extend the survival of marine oligotrophs. Light stimulates growth of the PR-containing marine flavobacterium Dokdonia sp. MED134 (Gomez-Consarnau et al. 2007). Vibrio sp. AND4, another PR-containing strain, exhibited longer survival during starvation than its corresponding PR deletion mutant (Gomez-Consarnau et al. 2010). When expressed in E. coli, a PR from the SAR86 clade of the Gammaproteobacteria generates a proton-motive force that turns the flagellar motor under light illumination (Walter et al. 2007). Several lines of evidence suggest that the PR in DSW-6 is a functional equivalent of other PRs and may play similar roles in contributing to the growth or survival of the bacterium in oligotrophic environments. First, it exhibits the key features of a typical PR, and all the functional residues in the proton transfer pathway are conserved. Second, the encoding gene is highly expressed in the presence of light or in the absence of sufficient nutrients.
Most significantly, the purified PR protein binds all-trans-retinal and pumps out protons (Han S-I, Kwon S-K, Kim JF, and Jung K-H, unpublished data). Microbial rhodopsins are functionally versatile and may move non-proton ions across the plasma membrane. HRs, which exist in halophiles, pump chloride inward instead of protons in a light-dependent manner (Kolbe et al. 2000). The channelrhodopsins of green algae are light-gated cation channels (Kato et al. 2012). In this article, we propose the existence of a new type of rhodopsin with a function that is unique among ion pumps and is related to salinity; this proposal is based on the following. These proteins, which we dubbed the NQ rhodopsins, have seven transmembrane domains and the key amino acids required for retinal binding and proton release, but the proton-donor and proton-acceptor residues of the PRs (Asp-87 and Glu-97) or the BRs (Asp-85 and Asp-96) are substituted by Asn and Gln. In the case of a PR, the replacement of Asp-87 with Asn or of Glu-97 with Gln yields variants that can no longer pump protons (Dioumaev et al. 2003; Saeedi et al. 2012). In addition, the expression levels of the NQ rhodopsin gene in DSW-6 correlate with illumination, nutrient levels, and, most importantly, the NaCl concentration. It is noteworthy that these novel rhodopsins are widely distributed in geographic environments exposed to high salinity, according to homology searches against genomic and metagenomic data sets.

FIG. 3.—The relative expression levels of the NQ rhodopsin gene in Nonlabens (Donghaeana) dokdonensis DSW-6 at different NaCl concentrations. (A) Cell growth was determined by optical density values at 600 nm for cultures incubated in 2.5% NaCl (circles), 5.0% NaCl (downward triangles), 7.5% NaCl (squares), 10% NaCl (upward triangles), and 12.5% NaCl (diamonds). After 3 h, cultures were collected for RNA extraction to analyze gene expression. Error bars indicate the standard deviations for triplicate samples. (B) The NQ rhodopsin expression level was quantified using qRT-PCR and the comparative threshold cycle (2−ΔΔCT) method (Livak and Schmittgen 2001). The 16S rRNA gene was used as an internal control, and the NQ rhodopsin transcript at 3 h was quantified relative to the NQ rhodopsin transcript in the UC (2.5% NaCl). The break in the y axis ranges from 60 to 330. Error bars indicate the standard deviations for triplicate reactions.

FIG. 4.—(A) Phylogenetic tree of full-length NQ rhodopsins, constructed based on NJ distance analysis of 243 positions. Each cylinder symbol represents a metagenomic read originating from a hypersaline microbial mat (brown), the Great Salt Lake (red), Yellowstone Lake (orange), an urban lake (yellow), Antarctica (white), or oceans (blue). (B) UPGMA clustering of the NQ rhodopsin-like metagenomic reads. The dendrogram was derived from UPGMA cluster analysis using identity values against completely sequenced genes. Metagenome projects from which NQ rhodopsin-like sequences were recruited: hypersaline mat, Guerrero Negro hypersaline microbial mat (CAM_PROJ_HypersalineMat); Great Salt Lake (Gm00191 of GOLD ID in the IMG database); Yellowstone Lake (CAM_PROJ_YLake); urban lake, LaBonte Lake (Gm00212 of GOLD ID in the IMG database); Antarctica, Antarctica aquatic microbial metagenome (CAM_PROJ_AntarcticaAquatic); ocean, Global Ocean Sampling expedition (CAM_PROJ_GOS). Truepera radiovictrix DSM 17093 1, YP_003705905; T. radiovictrix DSM 17093 2, YP_003706581.
The NQ rhodopsin gene in DSW-6 showed the highest expression at 10% NaCl, which is consistent with the observation from the metagenomic survey that NQ sequences are most abundant in a hypersaline microbial mat with a salinity of 9%. Sodium and chloride are the two most predominant dissolved ions in seawater; which of them could be the substrate of the NQ rhodopsins? An environmental clue supports the possibility that the NQ rhodopsins act as sodium pumps: a strain of T. radiovictrix isolated from a high-sodium environment (Albuquerque et al. 2005) has two NQ rhodopsin genes in its genome. Moreover, the key functional residues of the chloride-pumping HRs differ from those of the NQ rhodopsins (Kolbe et al. 2000). In particular, Asn-112 and Gln-123 of the NQ rhodopsin in DSW-6 are Thr and Ala in the HRs, respectively. Site-directed mutagenesis of Asp-85 in the proton-pumping BR to Thr or Ser enables the protein to transport chloride inward as an HR does (Sasaki et al. 1995). As sodium pumps, the NQ rhodopsins may simply be involved in the regulation of the osmotic state. However, the NQ rhodopsins would be more productive if they function in generating a sodium-motive force. Some extremophilic archaea and marine bacteria use the sodium-motive force to power energy-transducing machinery, either exclusively or by coupling it with the proton-motive force (Albers et al. 2001). In the sodium-rich marine environment, employing the sodium-motive force instead of the proton-motive force can be advantageous for marine bacteria that require sodium for their growth (Kogure 1998). These bacteria generate the sodium-motive force by possessing either a respiration-dependent primary Na+ pump that directly couples Na+ translocation to a chemical reaction or a Na+/H+ antiporter that converts the H+ gradient generated by primary H+ pumps into a Na+ gradient (Hase et al. 2001). The cell yields of Dokdonia sp. MED134 cultures exposed to light decrease in the presence of an inhibitor of the Na+-translocating NQR (Kimura et al. 2011). Among the transcripts of MED134, the Na+-translocating NQR and Na+ transporters were greatly up-regulated in cultures grown under light (Kimura et al. 2011). Therefore, the sodium ion gradient plays an essential role in light-enhanced growth promotion in a PR-containing marine bacterium, suggesting that the existence of sodium-pumping rhodopsins is plausible. The restricted distribution of the NQ rhodopsins to bacterial species in the classes Alphaproteobacteria, Cytophagia, Flavobacteriia, and Deinococci suggests horizontal transfer of the encoding genes among bacteria inhabiting saline environments. Furthermore, identification of the same type among the expressed sequence tags of a diatom in the Coscinodiscophyceae raises the interesting possibility of interdomain gene transfer between bacteria and eukaryotes. Diatoms are ubiquitous eukaryotic phytoplankton in the ocean (Armbrust 2009; Amin et al. 2012), and the genus Chaetoceros is considered one of the most abundant and widespread diatom groups (Nagasaki 2008). Metagenomic analysis indicates that, even in data sets that were supposed to be mostly prokaryotic, a number of NQ rhodopsin-like sequence reads are most closely related to that of C. neogracile. Taken together, our analysis reveals a novel family of rhodopsins whose genes are broadly present in microorganisms adapted to ecosystems that may experience hypersaline conditions and which seem to play an important role in sodium metabolism.
This finding echoes the coexistence of the H+-pumping ATPases and the Na+-pumping ATPases (Morth et al. 2011).
2016-05-12T22:15:10.714Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "038dc110ce4ced589cc8ffe289fd8dfd51f28810", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/gbe/article-pdf/5/1/187/17919193/evs134.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e9560120b399156bb910cd6af1ba0e525ae13b64", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
237509643
pes2o/s2orc
v3-fos-license
The effect of preoperative hypoalbuminemia on complications after primary hip arthroplasty Objectives To explore the risk factors for preoperative hypoalbuminemia and its effects on complications in elderly patients undergoing primary hip arthroplasty. Methods A total of 211 elderly inpatients who underwent hip arthroplasty were included. All patients were divided into a control group (preoperative serum albumin ≥35 g/L) and a case group (preoperative serum albumin <35 g/L). The risk factors for preoperative hypoalbuminemia and the postoperative complications were analyzed. Results Compared with controls, patients with hypoalbuminemia were older (P = 0.026), had lower BMI (P = 0.045), higher cardiac function scores (P < 0.0001), higher ASA scores (P = 0.023), and longer hospital stays (P < 0.001). The intraoperative albumin loss in the case group was significantly lower than that in the control group (P < 0.001), but there was no significant difference in operation time or intraoperative blood loss between the two groups (P > 0.05). Compared with controls, patients with hypoalbuminemia had a higher risk of any complication (P = 0.014), such as delayed wound healing, pleural effusion, and pneumonia. The risk of postoperative complications increased by 6.9% with each additional year of age (age > 60). The risk of postoperative complications in the case group was 1.89 times higher than that in the control group. Conclusion Patients with older age, poor nutritional status, and more than 2 concomitant diseases are more likely to develop preoperative hypoalbuminemia. Preoperative hypoalbuminemia is related to an increased incidence of postoperative complications. Perioperative albumin loss is not only due to perioperative blood loss, but is also related to vascular permeability and abnormal albumin metabolism. Introduction Serum albumin is the most abundant protein in human plasma and plays an important role in maintaining normal physiological function [1]. Clinically, hypoalbuminemia is defined as a serum albumin <35 g/L [2]. It is very common in the clinic and often occurs in elderly patients with chronic diseases such as hypertension, diabetes, and malnutrition. With the aging of society, more and more elderly patients need hip arthroplasty. However, elderly patients have low compensatory capacity, more concomitant diseases, and a limited ability to tolerate hypoalbuminemia. Perioperative hypoalbuminemia therefore has a great impact on the elderly, and the risk of postoperative complications such as wound infection, pneumonia, and limb swelling increases [3]. Many studies have shown that hypoalbuminemia is related to many postoperative complications and adverse outcomes, such as wound infection, pneumonia, and cardiac arrest [4][5][6]. However, most of these studies focus on preoperative malnutrition and treat preoperative hypoalbuminemia as just one of several indicators. In addition, the samples of some studies include not only patients with primary hip replacement, but also patients with knee arthroplasty and joint revisions. Therefore, there is no precise conclusion on the incidence of preoperative hypoalbuminemia and postoperative complications in elderly patients undergoing primary hip arthroplasty. The aim of this study is thus to investigate the risk factors for perioperative hypoalbuminemia in hip arthroplasty and the relationship between the preoperative albumin level and postoperative complications in the elderly.
All elderly patients were divided into a control group (preoperative serum albumin ≥35 g/L) and a case group (preoperative serum albumin <35 g/L), and the perioperative conditions and postoperative complications of the two groups were observed and analyzed retrospectively. The results of the study could provide a basis for improving the prognosis of patients after hip arthroplasty. Material The data of 211 patients who underwent primary hip arthroplasty from January 2019 to December 2020 were collected. Among them, 118 patients underwent total hip arthroplasty (THA), and 93 underwent hemi-hip arthroplasty (HHA). There were 70 men and 141 women, aged from 60 to 95 years; the average age was 74.21 ± 8.46 years, and the average BMI was 22.50 ± 2.40 kg/m2 (Table 1). There were 115 cases of femoral neck fracture, 22 cases of intertrochanteric femoral fracture, 32 cases of osteoarthritis, 32 cases of osteonecrosis of the femoral head, and 10 cases of congenital dysplasia of the hip joint. The inclusion criteria were as follows: (1) primary hip arthroplasty, (2) age ≥ 60 years, and (3) complete case data and examination results. The exclusion criteria were as follows: (1) revision hip arthroplasty, (2) age less than 60 years, (3) bilateral hip arthroplasty performed at the same time, (4) a history of operation on or infection of the hip joint, and (5) hematological diseases, blood coagulation disorders, known malignant tumors, or infections. Methods According to the preoperative serum albumin level, all patients were divided into a control group (preoperative serum albumin ≥ 35 g/L) and a case group (preoperative serum albumin < 35 g/L). The case information and clinical data of all patients were collected, including perioperative laboratory data (serum albumin, hemoglobin, C-reactive protein, etc.), preoperative health status, operation time, and intraoperative blood loss, as well as the occurrence of postoperative complications. Patients in both groups were treated with artificial hip arthroplasty through the lateral approach. According to the patient's age, femoral neck fracture, daily activity, and other indicators, the chief surgeon decided whether to perform THA or HHA. In this study, all cases were treated with uncemented joint replacement. For the patients undergoing THA, a drainage tube was placed after the operation and removed within 48 h. The patients undergoing HHA had no drainage tube after the operation. Patients were guided and assisted to walk 1-2 days after the operation. All patients were routinely given antibiotics for 24 h after arthroplasty to prevent infection, and oral anticoagulants were continued for 4 weeks after 1 week of subcutaneous anticoagulant injections (stopped promptly in case of unexpected events). Patients were given intravenous albumin supplementation if their serum albumin was less than 30 g/L at 3 days after the operation. Statistical analysis Frequency (percentage) was calculated for qualitative data, and the mean ± standard deviation was calculated for quantitative data. An independent t test or Wilcoxon rank-sum test was used to examine differences in continuous variables between groups, whereas the chi-square test or Fisher's exact test was used to compare differences in categorical variables. A 2-tailed P value < 0.05 was considered statistically significant.
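As an illustration of the statistical workflow just described, the following minimal Python sketch applies the same four tests with SciPy. The group data and contingency table are hypothetical placeholders, not the study's data.

```python
# Sketch of the group comparisons described above (hypothetical data).
import numpy as np
from scipy import stats

# Continuous variable (e.g., age): independent t test, or the Wilcoxon
# rank-sum (Mann-Whitney U) test when normality is doubtful
age_case = np.array([78, 81, 75, 69, 84, 72])
age_control = np.array([70, 74, 68, 77, 71, 66])
t, p_t = stats.ttest_ind(age_case, age_control)
u, p_u = stats.mannwhitneyu(age_case, age_control, alternative="two-sided")

# Categorical variable (e.g., diabetes yes/no per group): chi-square test,
# or Fisher's exact test when expected cell counts are small
table = np.array([[25, 55],    # case group: with / without diabetes
                  [20, 111]])  # control group
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)

print(f"t test P={p_t:.3f}, rank-sum P={p_u:.3f}, "
      f"chi-square P={p_chi:.3f}, Fisher P={p_fisher:.3f}")
```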
Demographic and preoperative data A total of 211 hip arthroplasty patients were included in the study. They were divided into a control group (n = 131) and a case group (n = 80). Compared with the control group, the patients in the case group were older (75.88 ± 8.92 vs 73.20 ± 8.04 years, P = 0.026), had lower BMI (22.08 ± 2.03 vs 22.76 ± 2.57 kg/m2, P = 0.045), and had longer hospital stays (14.99 ± 5.72 vs 12.46 ± 3.16 days, P < 0.001); the ASA score and cardiac function score in the case group were higher than those in the control group (P < 0.05). There was no significant difference in diagnosis (P = 0.643), mode of operation (P = 0.619), or mode of anesthesia (P = 0.937) between the two groups (Tables 1 and 2). Among the preoperative concomitant diseases, the proportions of cardiovascular disease, diabetes, bedsores, abnormal liver and kidney function, and more than two concomitant diseases in the case group were significantly higher than those in the control group (P < 0.05). There was no significant difference in respiratory and digestive system diseases between the two groups (P > 0.05) (Table 3). Intraoperative condition and perioperative albumin changes All patients underwent THA or HHA. Because the two surgical methods may affect the operation time, intraoperative blood loss, and other related factors, the control group was divided into a THA control group (n = 75) and an HHA control group (n = 56), and the case group was divided into a THA case group (n = 43) and an HHA case group (n = 37). Compared with the THA control group, the average serum albumin level before the operation and on days 1 and 3 after the operation was lower in the THA case group (P < 0.05), but there was no significant difference on day 7 after the operation (P > 0.05). Compared with the HHA control group, the average serum albumin level before the operation and on day 1 after the operation was lower in the HHA case group (P < 0.05), with no significant difference on days 3 and 7 after the operation (P > 0.05) (Fig. 1). The intraoperative albumin loss in the control group was significantly higher than that in the case group (P < 0.001), but there was no significant difference in operation time or intraoperative blood loss between the two groups (P > 0.05) (Fig. 2). The incidence of postoperative complications in the case group was significantly higher than that in the control group (P < 0.05). There was no significant difference in the incidence of deep venous thrombosis, dyspepsia, constipation, or electrolyte disturbance between the two groups (P > 0.05) (Table 4). To explore the influence of sex, age, BMI, fracture, previous medical history, mode of operation, preoperative hypoalbuminemia, and other factors on the occurrence of postoperative complications, the related factors were assigned values (Table 5), and univariate analysis of the related factors was performed (Table 6). The results suggested that sex, age, mode of operation, and preoperative hypoalbuminemia influence the occurrence of postoperative complications (P < 0.1). Binary logistic regression analysis was then performed on these univariate factors. The Hosmer-Lemeshow test gave χ2 = 2.899, P = 0.941 (P > 0.05), indicating no statistically significant difference between the values predicted by the model and the actual observed values; that is, the prediction model has good calibration ability.
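The Hosmer-Lemeshow calibration check reported above can be reproduced in outline as follows. This is a hedged sketch rather than the authors' code: the conventional decile-of-risk grouping is assumed, and the predicted probabilities and outcomes below are simulated placeholders.

```python
# Sketch: Hosmer-Lemeshow goodness-of-fit test for a fitted logistic
# regression model (hypothetical predicted probabilities and outcomes).
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, g=10):
    """Chi-square statistic over g groups of increasing predicted risk."""
    order = np.argsort(y_prob)
    y_true, y_prob = np.asarray(y_true)[order], np.asarray(y_prob)[order]
    stat = 0.0
    for grp_y, grp_p in zip(np.array_split(y_true, g), np.array_split(y_prob, g)):
        obs = grp_y.sum()                # observed events in the group
        exp = grp_p.sum()                # expected events in the group
        n = len(grp_y)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    p_value = chi2.sf(stat, df=g - 2)    # g - 2 degrees of freedom
    return stat, p_value

rng = np.random.default_rng(0)
probs = rng.uniform(0.05, 0.6, size=211)     # simulated fitted probabilities
outcomes = rng.binomial(1, probs)            # simulated complication outcomes
stat, p = hosmer_lemeshow(outcomes, probs)
print(f"chi2 = {stat:.3f}, P = {p:.3f}")
```

A large P value indicates that observed and expected event counts agree across the risk groups, which is the sense in which the paper's P = 0.941 supports good model calibration.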
Sex and mode of operation were not independent influencing factors for postoperative complications (P > 0.05), but age and preoperative hypoalbuminemia significantly affected the occurrence of postoperative complications (P < 0.05). The risk of postoperative complications increased by 6.9% with each additional year of age (age > 60), and the risk of postoperative complications in patients with preoperative hypoalbuminemia (serum albumin < 35 g/L) was 1.89 times higher than that in patients with normal preoperative albumin (preoperative serum albumin ≥ 35 g/L) (Table 7). Discussion In this study, according to the set inclusion and exclusion criteria, all eligible patients from January 2019 to December 2020 were included: 131 in the control group and 80 in the case group. Although the control group contained more patients than the case group, this imbalance is natural and closer to the real world than equal numbers in the two groups. Main discovery This study found that patients with more preoperative concomitant diseases were more likely to develop hypoalbuminemia. According to the results of this study, the proportions of patients with cardiovascular disease, diabetes mellitus, bedsores, abnormal liver and kidney function, or more than 2 concomitant diseases before the operation were higher in the case group (P < 0.05). Hypertension is one of the most common cardiovascular diseases in the world, and its pathogenesis is closely related to damage to vascular endothelial function [7]. Microvascular permeability is increased in patients with hypertension. When trauma or surgery occurs, a large number of inflammatory cytokines are released, which aggravates the injury of capillary endothelial cells, causes vascular leakage, and results in hypoalbuminemia [8]. In patients with diabetes, the insulin receptor is inhibited under stress, the oxidative metabolism of glucose is abnormal, and the negative nitrogen balance is more persistent and pronounced [9]. Abnormal liver and kidney function reduces protein synthesis capacity and plasma protein levels, as in the hypoproteinemia caused by cirrhosis. Patients with hypoalbuminemia were more likely to have postoperative complications. Compared with the control group, patients with hypoalbuminemia had a higher incidence of postoperative complications, especially delayed wound healing, pleural effusion, and pneumonia, with no significant difference in deep venous thrombosis of the lower extremities, dyspepsia, constipation, or electrolyte disturbance. Meanwhile, binary logistic regression analysis of the correlations between sex, age, BMI, fracture, and preoperative albumin level showed that the risk of postoperative complications increased by 6.9% with each additional year of age (age > 60), and that the risk of postoperative complications in patients with preoperative hypoalbuminemia (serum albumin < 35 g/L) was 1.89 times higher than that in patients with normal preoperative albumin (preoperative serum albumin ≥ 35 g/L). In this study, there was one death: a patient admitted to our hospital for surgical treatment of a femoral neck fracture, with a history of hypertension and of hysterectomy for endometrial carcinoma. The causes of death were acute renal failure, acute cerebral infarction, and metabolic acidosis.
Although the patient had hypoalbuminemia before the operation, the main cause of death was preoperative abnormal liver and kidney function and poor basic physical condition, rather than preoperative hypoalbuminemia. In addition, we found that intraoperative blood loss alone cannot be used to predict albumin loss. The results showed that the amount of postoperative albumin loss in the case group was significantly lower than that in the control group, in both the THA and HHA patients (P < 0.05), but there was no significant difference in operation time or intraoperative blood loss between the two groups (P > 0.05). This suggests that intraoperative blood loss is not the only cause of postoperative albumin loss, which is also related to vascular permeability or abnormal albumin metabolism. As for the changes in serum albumin during the perioperative period, the results of this study showed that the serum albumin levels of the THA case group before the operation and on days 1 and 3 after the operation were lower than those of the THA control group (P < 0.05), but there was no significant difference on day 7 after the operation (P > 0.05). The albumin level in the HHA case group was lower than that in the HHA control group before the operation and on day 1 after the operation, with no significant difference on days 3 and 7 after the operation (P > 0.05). All patients in this study were given exogenous albumin supplementation if their serum albumin was less than 30 g/L at 3 days after the operation. This may be one of the main reasons why there was no significant difference in albumin level between the two groups 7 days after the operation. Compared with previous studies The results of this study are consistent with other findings that malnourished patients receiving THA have a higher incidence of complications. However, most studies only regard preoperative hypoalbuminemia as one of several influencing factors and do not deeply study the relationship between preoperative hypoalbuminemia and postoperative complications of THA. At the same time, other studies differ from the results of this study in some respects. Newman et al. [6] found that, compared with the control group, patients with hypoalbuminemia had an 80% higher risk of any complication, a 113% higher risk of major complications (such as pulmonary embolism, acute renal failure, and myocardial infarction), a 79% higher risk of minor complications (such as wound infection, blood transfusion, and lower extremity deep venous thrombosis), and a 97% higher risk of reoperation. That incidence of postoperative complications is higher than in our study, which may be related to factors such as the proportion of fracture patients and the different preoperative concomitant diseases in the two samples. In addition, preoperative hypoalbuminemia also affects the incidence of complications after other joint replacements. Blevins et al. [10] found that preoperative hypoalbuminemia was a high-risk factor for periprosthetic joint infection (PJI) and showed good sensitivity and specificity in predicting PJI. Kamath et al. [4] evaluated patients treated with total knee arthroplasty (TKA) and found that patients with hypoalbuminemia had higher incidences of deep surgical site infection, pneumonia, urinary tract infection, and sepsis than those with normal albumin levels.
Patients with hypoalbuminemia had a higher risk of death and coma than patients without hypoalbuminemia, and more frequently required unplanned tracheal intubation, intraoperative or postoperative blood transfusion, and ventilator support for more than 48 h. Any complications and infections (systemic and wound) were also more common. However, in our study, although there was one death in the preoperative hypoalbuminemia group, the cause of death was mainly poor basic condition and abnormal liver and kidney function, rather than hypoalbuminemia. These studies show that preoperative hypoalbuminemia can significantly affect the incidence of complications after joint replacement. Limitations We acknowledge that there are some limitations to this study. The cases included were mainly trauma patients with femoral neck fracture or intertrochanteric fracture (64.9%), while fewer patients underwent elective surgery for conditions such as osteonecrosis of the femoral head and osteoarthritis of the hip joint (35.1%). Some studies have shown that, compared with elective patients, trauma patients are older, have longer hospital stays, more intraoperative complications, and lower preoperative serum albumin levels [11]. Furthermore, only patients over 60 years old were included in the study. Compared with young and middle-aged patients, elderly patients are more likely to have underlying diseases such as cardiovascular disease and abnormal liver and kidney function, and these basic diseases may cause albumin levels to be lower than normal. Finally, because of the retrospective, single-center nature of this study, its generalizability is limited. Conclusion In this study, we found that patients with hypoalbuminemia who needed primary hip replacement were older and had poorer nutritional status and more concomitant diseases such as cardiovascular disease, diabetes, bedsores, and liver and kidney dysfunction. Patients with hypoalbuminemia had longer hospital stays and a higher incidence of postoperative complications such as delayed wound healing and pneumonia. At the same time, the cause of postoperative hypoalbuminemia is not only perioperative blood loss, but also abnormal vascular permeability and albumin metabolism. We also found that the risk of postoperative complications increased by 6.9% with each additional year of age (age > 60), and that the risk of postoperative complications in patients with preoperative hypoalbuminemia (serum albumin < 35 g/L) was 1.89 times higher than that in patients with normal preoperative albumin (preoperative serum albumin ≥ 35 g/L). Therefore, for patients with hypoalbuminemia who need primary hip replacement, active treatment of concomitant diseases before the operation and adequate albumin supplementation in the perioperative period may help reduce the risk of postoperative complications.
2021-09-15T13:37:38.518Z
2021-07-19T00:00:00.000
{ "year": 2021, "sha1": "ae42088d5f1f861360e64c637104aa8c831f4d0c", "oa_license": "CCBY", "oa_url": "https://josr-online.biomedcentral.com/track/pdf/10.1186/s13018-021-02702-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ae42088d5f1f861360e64c637104aa8c831f4d0c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226782696
pes2o/s2orc
v3-fos-license
Applying BIM and 3D laser scanning technology on virtual pre-assembly for complex steel structure in construction Steel structures need to be trial-assembled before the components are transported to the site, to ensure that the steel structure can be installed successfully. The traditional approach is physical trial assembly, which needs a large amount of equipment, occupies large areas, and is labor- and time-consuming. To address these issues, industrial photogrammetry has been used to acquire data so that the steel structure can be assembled in a virtual environment. High-precision data acquisition of steel components can now be carried out with a 3D laser scanner, and the feature data of those components can be extracted automatically by a program. As a result, the accurate line shape of the spliced components can be extracted after the precise assembly of the steel structure is completed in a virtual environment, which is significant for the subsequent mechanical analysis. Introduction The research described in this paper aims to explore an efficient and accurate digital pre-splicing method. 3D laser scanning is used to obtain high-quality, all-round 3D digital models of building components; numerical analysis is then performed in MATLAB, and key parameters, such as bolt hole positions, are extracted. In the computer, the posture of each component is adjusted to predict whether the components can be spliced smoothly, which is essential for predicting whether on-site construction will proceed smoothly. Previous experience and research have shown that, before the components are transported to the site, the steel structure needs to be trial-assembled to ensure smooth installation on the construction site. Trial assembly requires sufficient space, labor, and almost the same mechanical equipment as the site, and accounts for a large share of the total cost of manufacturing steel components (10%-25% of the total cost) [1]. Compared with trial assembly, in virtual assembly the component features are acquired by data acquisition when the steel component is completed, a computer program is then used to simulate the splicing posture, and any component that cannot be spliced regardless of posture is sent back for reworking. To predict whether components meet the splicing requirements, this method of virtual assembly is being applied by more and more researchers. Literature review Virtual pre-assembly is a current development trend in the construction industry. Some engineering projects in China have already applied this method to guide construction and achieved good results [2][3][4]. However, there are limitations in the data acquisition methods: many engineering cases only acquire data for a small number of key control points. To achieve all-round, high-efficiency, high-precision virtual pre-assembly, data acquisition must be grasped and utilized more effectively. Engineering case There are some representative virtual pre-assembly projects in China. Tang Jiyu [2] used a total station to collect feature data of installed components at Kunming Airport; the corresponding models were established on a computer, virtual pre-splicing was carried out based on the control points of the respective models, and the goal of better predicting smooth splicing was achieved.
Li Yadong et al. [3] measured the control points of components (outer contour feature points and bolt hole group positioning points) at the Shanghai Tower, and pre-assembly was realized by coordinate transformation. Ding Yifeng et al. [4] used a similar principle to conduct a pre-assembly study on a layer of the third ring of the Shanghai Tower. The general process in these cases is: (1) use the total station to measure the control points; (2) splice by coordinate transformation. With the continuous updating and development of data acquisition technology, comprehensive and accurate data acquisition is no longer a problem. Industrial photogrammetry system Industrial photogrammetry systems are widely used data acquisition methods, such as the CATS system [5] jointly developed by Yokogawa Bridge Corporation of Japan and Nagaoka University of Technology, the measurement system introduced by GOM [6] of Germany (0.2 mm for a 10 × 5 × 5 m3 volume), and the V-STARS [7] measurement system of the American company GSI. In addition, the 3D photogrammetry system of Beijing Tianyuan [8], China, has a measurement accuracy of 0.1 mm/4 m, and the XJTUDP three-dimensional optical measurement system of Xi'an Jiaotong University [9] has a measurement accuracy of ±0.15 mm/m. These industrial photogrammetry systems achieve high accuracy in data acquisition. 3D laser scanning technology In recent years, three-dimensional laser scanning technology, known as "real-scene reproduction technology," has been widely used in the construction industry. The general principle of 3D laser scanning is to use laser ranging to obtain a large number of accurate and dense 3D data points on the surface of the target object. Compared with traditional measuring tools, it has unparalleled advantages: it is fast, highly precise, and all-round. With the continuous development of 3D laser scanning technology, scholars at home and abroad have conducted in-depth applied research. Xie Hongquan et al. [10] conducted a systematic test study on the ranging accuracy of ground-based 3D laser scanners. Based on the Leica HDS ScanStation 2 three-dimensional laser scanning system, Li Haiquan et al. [11] explored several important factors affecting measurement accuracy and methods for controlling them. Cao Xiange et al. [12], through a study of a series of factors affecting the accuracy of 3D laser scanning in application, concluded that controlling the scanning distance, scanning point spacing, point cloud stitching accuracy, and other factors guarantees the accuracy of 3D laser scanners. Compared with industrial photogrammetry, 3D laser scanning has many advantages: (1) it is easy to operate; (2) it can obtain characterization information for large-volume components at the same time. In summary, under controlled conditions of distance, scanning point spacing, illumination, and other factors, a 3D laser scanner can be used to obtain high-quality, all-round digital models of building structures that meet the requirements for accurate extraction of feature data. Moreover, using the three-dimensional laser point cloud for component pre-assembly makes it possible to check intuitively whether the steel beams can be spliced successfully. The real-world model of components formed by the dense 3D laser point cloud also enriches the BIM.
Virtual pre-splicing implementation model establishment This part describes the steps of data acquisition, feature information extraction, and pose splicing based on 3D laser scanning. For general components, virtual pre-splicing can be realized through these steps. Data acquisition. A 3D laser scanner (FARO 330) is used to collect data on the components to be spliced. To ensure the quality of splicing, the scanning must be carried out at a suitable distance, with appropriate scanning parameters and a suitable temperature. The target balls must not be arranged on the same line. In particular, the splicing control area should be scanned carefully to ensure the quality of feature point extraction. Feature information extraction. The bolt hole point cloud is extracted and a circle is fitted. To simplify the algorithm, the stereoscopic bolt hole cylinder is projected onto a plane along its own direction vector when fitting the circle. There are a few noise points in each bolt hole; the key is to delete these noise points through programmatic iteration. The center and radius obtained from a large number of dense, high-quality points ensure the high quality of the virtual pre-assembly. Algorithm-based attitude adjustment and assembly. F. Case [13] of Italy proposed a pre-assembly method based on bolt hole positions, the extended orthogonal Procrustes analysis (EOPA), and used it to digitally pre-assemble the Chernobyl nuclear power plant shield project successfully. This model also uses this classical algorithm to adjust the attitude of the fitted bolt hole circle coordinates to determine whether two steel beams can be assembled smoothly. Assuming there are P points, the measured coordinates and theoretical coordinates are expressed by matrices A and B, respectively; the unknown rotation matrix is T, the unknown displacement vector is t, and the unknown error matrix is E, so that E = B − A·T − 1·t^T (1), where 1 is a P × 1 vector of ones. Following the standard orthogonal Procrustes solution, let S = A^T·B after centering both point sets, and perform singular value decomposition on SS^T and S^T S to obtain SS^T = VΛV^T and S^T S = WΛW^T, from which T = V·W^T and t = (B − A·T)^T·1/P. Bringing T and t into formula (1) yields the error matrix E. Pre-assembled component line-shape extraction. Once the final pose of the virtually pre-assembled structure is obtained, the line shape is extracted from the component point cloud in preparation for the next stage of mechanical analysis. Example 3.2.1 Data acquisition. The above model is applied to this example to illustrate its application steps. The Fujiang Bridge in Tongnan is 576 m in length. The main bridge is a 57 m + 128 m + 220 m single-tower, double-cable-plane composite girder cable-stayed bridge, 405 meters in length and 36.6 meters in width (including the cable-stayed area), with a tower 156 meters in height. A traditional total station cannot effectively locate the bolt holes for steel beam splicing. The distance between bolt holes is 100 mm, and the standard diameter of the bolt holes is 16.5 mm. To ensure that two adjacent steel beams can be spliced according to the manufacturing line shape (stress-free line shape), they must be pre-spliced in order to predict whether the construction can proceed smoothly. The data were collected as described above and are not detailed here. 3.2.3 Algorithm-based attitude adjustment and assembly. The virtual pre-splicing of the two steel beams is carried out using the above algorithm, and the deviations of the corresponding bolt holes of the two steel beams in the X and Y directions after attitude optimization are counted.
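A minimal sketch of the two computational steps described in this section — least-squares circle fitting with iterative noise-point removal, and SVD-based rigid alignment in the spirit of the orthogonal Procrustes (EOPA) approach — might look as follows in Python/NumPy. This is an illustration under simplifying assumptions (each hole's points already segmented and projected onto the flange plane, and the one-to-one hole correspondence known); the coordinates are invented, and it is not the paper's MATLAB implementation.

```python
# Sketch: bolt-hole circle fitting with iterative outlier removal, then
# SVD-based rigid alignment of the two sets of hole centers.
import numpy as np

def fit_circle(pts, iters=5, k=2.5):
    """Algebraic (Kasa) least-squares circle fit with residual-based
    outlier rejection repeated for a few iterations."""
    for _ in range(iters):
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
        b = x ** 2 + y ** 2
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx ** 2 + cy ** 2)
        resid = np.abs(np.hypot(x - cx, y - cy) - r)
        keep = resid < k * resid.std() + 1e-12   # drop noise points
        if keep.all():
            break
        pts = pts[keep]
    return np.array([cx, cy]), r

def rigid_transform(A, B):
    """Rotation T and translation t minimizing ||A @ T + t - B|| via SVD."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    T = U @ Vt
    if np.linalg.det(T) < 0:                     # avoid reflections
        U[:, -1] *= -1
        T = U @ Vt
    return T, cb - ca @ T

# Demo: recover the center/radius of a simulated 16.5-mm bolt hole
theta = np.linspace(0, 2 * np.pi, 60)
hole = np.column_stack([8.25 * np.cos(theta), 8.25 * np.sin(theta)]) + [50.0, 20.0]
center, radius = fit_circle(hole + np.random.default_rng(1).normal(0, 0.05, hole.shape))

# Hypothetical matched hole centers on the two mating flanges (mm)
centers_a = np.array([[0.0, 0.0], [100.0, 0.2], [199.8, 100.1], [0.1, 100.0]])
centers_b = np.array([[5.0, 3.0], [104.9, 3.4], [204.5, 103.5], [5.2, 103.1]])
T, t = rigid_transform(centers_a, centers_b)
deviation = centers_a @ T + t - centers_b        # per-hole X/Y deviations
print(np.abs(deviation).max())                   # check against tolerance
```

The per-hole X/Y deviations computed at the end correspond to the deviation statistics the paper reports (less than 3 mm after attitude optimization) and would be compared against the splicing tolerance.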
In order to study the assembly process of the steel beams more visually, the corresponding point clouds were re-posed using the parameters returned by the algorithm.

Figure 12. Line shape of the two steel beams.

Conclusion

This paper proposed an efficient and accurate digital pre-splicing method in which, after a panoramic, refined point-cloud digital model is obtained, numerical analysis software automatically extracts enough splicing feature point data. Compared with traditional trial assembly, it has the advantages of occupying no yard space, avoiding dangerous high-altitude operations, supporting smooth assembly, and predicting whether the components can be spliced. More specifically, the method has the following characteristics: (1) Automated batch fitting of feature data greatly improves efficiency; traditional manual extraction of bolt hole features is far less efficient than automatic extraction by the algorithm, which is of far-reaching significance for keeping the project on schedule. (2) Because the features are fitted from a large number of precise points, accuracy is sufficiently guaranteed: the fitted bolt hole radii are normally distributed around the ideal radius, with deviations within 0.4 mm, showing that the fitting works well. (3) In the virtual pre-assembly of the two steel beams, the data show that the deviations in the X and Y directions are less than 3 mm. (4) A precise line shape can be extracted from the point clouds after virtual stitching, which is of important guiding significance for mechanical analysis.

The method may still be improved in the following respects: (1) Noise points in the point cloud affect the accuracy of the fitted bolt holes, so more effective de-noising needs further study. (2) In this paper a plane projection replaces the three-dimensional cylinder analysis; while this stabilizes the cylinder fitting, it assumes by default that the perpendicularity of the bolt holes meets the requirements, which demands higher fabrication quality. Further research may extract the relevant features with artificial-intelligence algorithms, which is expected to further improve the efficiency and accuracy of feature data extraction.
2019-12-19T09:15:56.899Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "5969291ae4a5f1d42f76518c89f268aa9d421c2c", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/371/2/022036", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b3f3e96ce98dfaeb4a3fa9c70cd916adcacebf62", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
235166112
pes2o/s2orc
v3-fos-license
Few-Shot Upsampling for Protest Size Detection

We propose a new task and dataset for a common problem in social science research: "upsampling" coarse document labels to fine-grained labels or spans. We pose the problem in a question answering format, with the answers providing the fine-grained labels. We provide a benchmark dataset and baselines on a socially impactful task: identifying the exact crowd size at protests and demonstrations in the United States given only order-of-magnitude information about protest attendance, a very small sample of fine-grained examples, and English-language news text. We evaluate several baseline models, including zero-shot results from rule-based and question-answering models, few-shot models fine-tuned on a small set of documents, and weakly supervised models using a larger set of coarsely-labeled documents. We find that our rule-based model initially outperforms a zero-shot pre-trained transformer language model but that further fine-tuning on a very small subset of 25 examples substantially improves out-of-sample performance. We also demonstrate a method for fine-tuning the transformer span on only the coarse labels that performs similarly to our rule-based approach. This work will contribute to social scientists' ability to generate data to understand the causes and successes of collective action.

Introduction

A common data collection task in social science is applying fine-grained labels to documents, including extracting specific passages from text. In many cases, social scientists already have many coarsely-labeled documents and a small number of hand-annotated documents. An automated technique for "upsampling" from coarse labels to more detailed information could help researchers produce better tailored datasets. However, this process does not fit the tools that applied researchers have access to: training a document classifier on coarse labels will not produce the fine-grained answers.

OWOSSO -- On Saturday, supporters of Bernie Sanders held the first of two rallies at City Hall in anticipation of Michigan's presidential primary election Tuesday. The rally featured a crowd of roughly 30 to 40 people and kicked off at 2 p.m. In 2016's presidential primary, Sanders beat Hillary Clinton by a slim margin of 49.8.
Coarse Label: size category 1 (10-100 attendees)
Gold Span: "30 to 40"

Figure 1: Documents in our corpus have "coarse labels" reporting the order of magnitude of the protest size and "gold spans" reporting the exact size of the protest. The frequency of number words (in bold) shows why this task is not trivial.

Innovations in zero-shot and few-shot classifiers and information extraction (IE) techniques show promise, but new methods are required that can also draw on the existing coarse document annotations to improve fine-grained extraction. We introduce a new task and dataset for improving information extraction systems' performance when given many coarsely-labeled documents and a small number of documents annotated with the spans of interest. We draw on a dataset on dissent and collective action (hereafter, "protests") in the United States compiled by the Crowd Counting Consortium (2020) (CCC) to construct our training and evaluation data. Protests are an important avenue for social change and of major interest for social science researchers. Current work suggests that attendance is a major factor in the success of a protest movement (Chenoweth and Margherita, 2019), but good data on protest attendance is difficult to collect.
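Concretely, one record in such a corpus pairs raw text with a coarse label and, for a small number of documents, a gold span. A minimal sketch of the record layout (the field names are our own, not the released schema):

```python
record = {
    "text": "... The rally featured a crowd of roughly 30 to 40 people ...",
    "coarse_label": 1,        # order of magnitude: 10-100 attendees
    "gold_span": "30 to 40",  # present only for the few annotated documents
}
```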
CCC compiles structured data about protests from expert annotators using news reporting, including the exact text span from the article that describes the protest's size and the order of magnitude of the crowd size. An example is given in Figure 1. The task we propose is to locate the span within a document that reports the size of a protest, given a training set of documents labeled with the order of magnitude of the protest ("coarse labels") and a small number of document pieces (25) with exact span information ("gold spans"). Drawing on recent work in question answering, we repurpose existing models to generate fine-grained labels given a large set of coarsely-labeled documents and a small set of documents with fine-grained labels. We provide results from three baseline models, finding that a heuristic, rule-based system outperforms a zero-shot transformer-based question-answering (QA) model. Fine-tuning on a small set (25) of gold spans substantially improves performance. We also introduce a new multitask model that reaches equivalent performance despite fine-tuning on no gold spans.

Task and Data

For each protest in the CCC dataset, we collect the following data: the raw article text (scraped from the CCC-provided URLs), the exact string reporting the protest size, and a "size category" provided by CCC that reports the order of magnitude size of the crowd. The task is to predict the size text string, given plentiful training data with the size category and the gold spans for a small set of partial documents (25 paragraphs). The test set includes only the full article texts and order-of-magnitude information. To make the task tractable, we exclude protests that are coded from multiple documents and documents from which multiple protests are coded. From 48,736 total protests reported by CCC between January 21, 2017 and October 31, 2020, we eliminate multi-document/multi-protest reports and successfully scrape text for 11,005 protests. We eliminate documents where the CCC-reported size text is not located within the document, leaving 3,849 protests/documents. We split these data into four parts:

• Coarse label training set: text with coarse, order-of-magnitude labels {0,1,2,3} but no exact answer spans (2,694 full articles).
• Gold span training set: short texts with exact answer spans but no order-of-magnitude labels (25 paragraphs).
• Validation set: documents with order-of-magnitude labels and exact answer spans (200 full articles).
• Test set: documents with order-of-magnitude labels and exact answer spans (930 full articles).

The task is challenging because models are not evaluated on the largest portion of the data (coarse document labels) but rather on a fine-grained span prediction task for which only limited data is available. The task can thus be framed in several ways, depending on which parts of the data are used and in what ways:

• Zero shot: use an off-the-shelf model to detect protest sizes without any fine-tuning on our data, either coarse or fine.
• Few-shot on gold spans: fine-tune a baseline model on the small number of gold span labelled data.
• Coarse labels: use a coarse-to-fine model to identify spans given only document-level labels.
• Coarse labels + gold spans: train a model using both coarse order-of-magnitude labels and limited fine-grained span data.

Related Work

The task we propose relates to several strands of research. One framing is as a question-answering task (QA), where the same question ("How many people protested?") is asked about each document.
A large set of NLP tasks can be framed as question-answering models (McCann et al., 2018), and QA models trained on language models can generalize to new domains with few or no labeled examples (Brown et al., 2020; Radford et al., 2019). QA models have also been successfully used when the training data is noisy (Lin et al., 2018). Given the flexibility of QA models and their strong performance in new domains, we use one as the base of our models. A different framing is as a "rationale" problem for a document classifier. Lei et al. (2016) train a classifier on document-level labels and use attention weights to extract rationales for the classification. Our task differs from the canonical document classification task because a responsive model is evaluated on the extracted spans, not on the coarse label prediction task. Distant supervision uses noisy labels, often applied automatically or with heuristic labels, to train systems (Ratner et al., 2017). The classic example of distant supervision uses a database of relations to label binary relations in text (Mintz et al., 2009). Weak supervision, more generally, uses labels that are noisy or coarse to train fine-grained models (Khetan et al., 2018; Robinson et al., 2020). Some work on "noisy labels" relates to our task, where labels are presented at a higher level of aggregation rather than with noise. Nayak et al. (2020) propose a model that uses coarse, document-level sentiment labels to train a fine-grained, sentence-level sentiment classifier. Their task differs from ours in the nature of their labels: in moving from document-level to sentence-level labels, they predict labels of the same type (sentiment scores). In our task, we also change the labels themselves, from a crowd size order of magnitude to a token-level label of whether a word describes the exact protest size.

Modeling Strategy

We first attempt the task using a rule-based model (the "heuristic keyword model") and an off-the-shelf zero-shot QA system. We then introduce a multitask neural network model based on a pre-trained transformer language model. We fine-tune and evaluate this model on the coarse labels and gold spans, as well as on noisy labels we generate through a rule-based procedure. The two standard performance metrics for question answering tasks are exact match and F1 (Rajpurkar et al., 2018a). We compute exact match as the sum of exact matches (predicted spans exactly matched in the set of correct target spans) divided by the total number of documents. We compute F1 per document based on token-level precision and recall, then average across documents.

Heuristic Keyword Model

Our heuristic model is a rule-based system that uses keyword matching and dependency parses to return a single number-containing phrase from the article. We first locate all number-containing phrases (digits or number words) in the text with regular expressions. Using a rule-based system, we convert these number phrases to a numeric form (e.g. "several dozen" → 36) and then compare the phrase's numerical value to the protest's reported order of magnitude. If the phrase does not match the order of magnitude, we eliminate it from our candidate list. To further reduce the candidate list, we look for number phrases that occur within the same sentence as a set of keywords: "protesters", "demonstrators", "gathered", "crowd", "rallied", "attended", "picketed", "protest". If multiple sentences have keyword matches, we return the first one. The CCC data's size spans include modifiers alongside the raw numerical values (e.g. "about 20", "more than 50"); a minimal sketch of the matching step appears below.
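The sketch covers only the number-phrase and keyword matching; it is not the authors' code, the number-word table is abbreviated, the order-of-magnitude bucket thresholds are assumptions, and the dependency-parse widening described next is omitted:

```python
import re

KEYWORDS = {"protesters", "demonstrators", "gathered", "crowd",
            "rallied", "attended", "picketed", "protest"}
WORD_VALUES = {"dozen": 12, "dozens": 24, "several dozen": 36,
               "hundreds": 200, "thousands": 2000}  # abbreviated table

def to_number(phrase):
    """Convert a number phrase to a numeric value, if possible."""
    digits = re.search(r"\d[\d,]*", phrase)
    if digits:
        return int(digits.group().replace(",", ""))
    return WORD_VALUES.get(phrase.lower())

def size_category(n):
    """Order-of-magnitude bucket, mirroring coarse labels {0,1,2,3}."""
    if n < 10:   return 0
    if n < 100:  return 1
    if n < 1000: return 2
    return 3

def heuristic_span(sentences, coarse_label):
    """Return the first number phrase matching the coarse label that
    co-occurs with a protest keyword in its sentence."""
    pattern = re.compile(
        r"\d[\d,]*(?:\s*to\s*\d[\d,]*)?|several dozen|dozens?|hundreds|thousands",
        re.IGNORECASE)
    for sent in sentences:
        if not any(k in sent.lower() for k in KEYWORDS):
            continue
        for m in pattern.finditer(sent):
            value = to_number(m.group())
            if value is not None and size_category(value) == coarse_label:
                return m.group()
    return None
```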
We use dependency parse information generated by spaCy to extract the wider span: for each sentence matching a keyword, we identify the word in the sentence that is a number word or numeric, and also include child nodes with the following labels: adjectival modifier, modifier of quantifier, compound, adverbial modifier. We used spaCy version 2.3.2 with the en_core_web_lg model to perform the dependency parsing and sentence segmentation.

Zero-Shot QA Model

We begin with a pre-trained RoBERTa model (Liu et al., 2019) that we subsequently fine-tune for question answering using the Stanford Question Answering Dataset (SQuAD) 2.0 (Rajpurkar et al., 2018b), as described in Appendix A; we use roberta-base from HuggingFace (Wolf et al., 2020). The QA model architecture is depicted on the left side of Figure 2. Because we do not tune this model on our dataset, we consider its predictions to be zero-shot.

Fine-tuned QA Model

To use the coarse labels, we add an additional objective to the QA model that is trained to predict the crowd size order of magnitude. The model first predicts the start and end token vectors for a given context-question pair. We compute the cumulative sum (over tokens) of the predicted start token vector and the reverse cumulative sum for the predicted end token vector. The resulting vectors are element-wise multiplied to produce an attention mask with high values in the range of tokens between the predicted start and end tokens. We apply an L1 penalty to this mask to ensure the attention focuses on a small number of tokens. The attention mask is then element-wise multiplied with the token hidden states produced by RoBERTa. Global max pooling and a single linear regression layer applied to these attended-to hidden states predict the coarse label (as shown in the right side of Figure 2). The loss function for the multitask model, an unweighted combination of cross-entropy loss and mean squared error, is

$$\mathcal{L} = -\sum_{i=1}^{n}\big[x_i\log\hat{x}_i + y_i\log\hat{y}_i\big] + (z-\hat{z})^2,$$

where $x_i \in \{0,1\}$ indicates whether token $i$ is the start of an answer span, $y_i \in \{0,1\}$ indicates whether token $i$ is the end of an answer span, $z$ is the document's coarse label, and $n$ is the number of tokens (512, here). The model can be fit to data including any combination of these three targets.

Results

Results on the test set are given in Table 1. RoBERTa QA refers to RoBERTa fine-tuned on SQuAD 2.0. With only fine-tuning on SQuAD 2.0, the model scores 17% exact match accuracy and 27% F1. On their own, the heuristic-derived spans outperform zero-shot RoBERTa QA. "+ Heuristic spans" indicates that the given model was fine-tuned on the spans identified by the heuristic model. Fine-tuning the multitask model on the coarse labels alone results in a 180% increase in exact match accuracy and a 100% increase in F-score. An example prediction made by the multitask coarse labels model is shown in Figure 3 (actual span in bold). However, the highest scores are achieved by fine-tuning the RoBERTa QA model on just the 25 gold spans: 67% exact match accuracy and 65% F-score. The greatest performance by a multitask model without any gold spans is achieved by the model fine-tuned on both the coarse labels and the heuristic spans: 66% exact match and 63% F1, just below the top performing model with access to the gold spans.
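For reference, the cumulative-sum attention mask described in the Fine-tuned QA Model section can be sketched in a few lines of PyTorch. This is a minimal illustration under the stated design (forward cumsum of start scores, reversed cumsum of end scores, element-wise product), not the authors' implementation; the use of softmax to normalize the logits is our assumption:

```python
import torch

def span_attention_mask(start_logits, end_logits):
    """Build a soft mask that is large between the predicted start and
    end tokens.

    start_logits, end_logits : (batch, n_tokens) tensors.
    """
    start_p = torch.softmax(start_logits, dim=-1)
    end_p = torch.softmax(end_logits, dim=-1)
    left = torch.cumsum(start_p, dim=-1)  # rises after the predicted start
    right = torch.flip(torch.cumsum(torch.flip(end_p, [-1]), dim=-1), [-1])
    # `right` falls after the predicted end; the product peaks in between.
    return left * right

# The mask (plus an L1 penalty on it) then weights the token hidden states
# before global max pooling and the linear coarse-label head, e.g.:
# pooled = (mask.unsqueeze(-1) * hidden_states).max(dim=1).values
```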
We interpret the success of this model and the coarse labels model over the base RoBERTa QA model as evidence that our attention masking strategy was successful at upsampling from coarse document-level labels to specific token-level spans.

Discussion and Conclusion

Social scientists often find themselves with coarsely-labeled text data for which upsampling may provide valuable additional information. We anticipate applications in extracting fine-grained policy proposals from party manifestos with document-level annotations (Lehmann et al., 2017), the specific armed actors engaged in civil war violence from documents labeled with "rebel" or "government" (Lyall, 2010), or the specific phrases in news text that lead to their censorship (King et al., 2013). We also see applications in upsampling ranges of casualties from NGO reports or Wikipedia articles to the exact sizes, upsampling years to more specific dates, or using rounded numbers from financial disclosures or government reports as coarse supervision for extracting the exact amount from text. Improvements in zero- and low-shot models should encourage applied researchers to explore computational approaches to text analysis even when training data is scarce, noisy, or coarse: common challenges that are often perceived as intractable. At the same time, NLP researchers should continue to improve models that can learn to extract fine-grained information given coarse training data. Multitask QA models show promise in doing so, but future work can further integrate work from the weak/distant supervision literature, including modeling the noisiness of the labels.

Impact Statement

Studies of protests have the potential for serious ethical concerns. Some tasks, such as identifying or de-anonymizing the participants in a protest, could produce major harms. Our application, identifying the number of attendees at a protest, has less potential for harm. Our collection of information on the size of protests will generally accord with the desires of protesters. Social scientists have long seen protests as an important tool for social movements to overcome collective action problems: by making support for a position visible in the streets, a protest assures potential supporters of the protest that their opinions are held by others and that the group could potentially achieve its ends with more support (Kuran, 1989; Petersen, 2001; Tarrow, 2011). Providing better information on the size of protests furthers the signalling and information-disseminating objectives of the protesters themselves. While we might not agree with the causes of all protesters in the United States, we believe that on balance, our work benefits those with less power more than it does those with greater power, who can likely already collect the information they seek manually. The data that we draw on was collected by the Crowd Counting Consortium, which relies on volunteers and paid research assistants to collect the data. Their protocol was reviewed by the University of Denver IRB and deemed exempt because they do not collect personally identifiable information and use only public data. A second consideration in our work involves the role of copyrighted news text in our project. Our method uses copyrighted news text that we scraped from the web. While scraping websites is legal in the United States, redistributing copyrighted text is more difficult to justify and depends on how the use fits into the fair use doctrine.
Balancing copyright holders' rights with public and educational benefit is at the core of the fair use doctrine. Our attempt to balance the harms to copyright holders and the harms to broader public and scientific benefit is to publish a URL list and scraper so that our corpus can be re-created by future researchers. Additionally, in cases where a researcher is attempting to replicate our work for educational purposes, we will make our scraped corpus available for the narrow purpose of replicating our work.

A Fine-tuning RoBERTa on SQuAD 2.0

A.1 SQuAD 2.0 Fine-Tuning

In order to facilitate extensions to the standard QA model, we perform the fine-tuning of RoBERTa on SQuAD 2.0 ourselves (Abadi et al., 2015). We fine-tune on the SQuAD 2.0 training set for three epochs using the settings recommended by Nandan (2020). We use a batch size of 12 due to memory limitations. We use the Adam optimizer with a learning rate of 5e-5. Our model achieves 0.78 and 0.74 exact match on the training and evaluation sets, respectively. We use this model only as a basis for subsequent fine-tuning and therefore do not attempt to match state-of-the-art performance on the SQuAD 2.0 evaluation set. The model is trained on two RTX 2080 Ti GPUs. Model size and training time details are provided in Table 2. We allow the QA model to identify impossible-to-answer questions by predicting the sequence start token ("<s>") as both the answer span start and end token. To fit within the RoBERTa base model's 512 token limit, we pre-process all text inputs via a shingling procedure. We limit contexts to 450 tokens, thereby allowing questions of up to 62 tokens in length. We then pad to a uniform 512 tokens. When contexts exceed 450 tokens, we use a sliding window of 450 tokens that we step through the context 225 tokens at a time. We guarantee all samples generated from large contexts contain precisely 450 tokens by adjusting the first and last window positions such that they do not extend before or after the first or last context token, respectively. We aggregate predictions across shingles by assuming one predicted span per document and selecting the predicted span from the shingle for which $\max_{i\in[1,\dots,512]}(\hat{x}_i) + \max_{i\in[1,\dots,512]}(\hat{y}_i)$ is the greatest.

A.2 Task-Specific Fine-Tuning

The selection of learning rate for these models, 5e-6 (exactly one order of magnitude lower than the default used for SQuAD fine-tuning), was due to our sensitivity to overfitting on the very small set of span examples. All models were trained for 150 batches, each batch comprising 12 samples chosen from the training datasets with replacement. When multiple datasets are used to train the same model, batches alternate between them. We selected the number of batches for training by observing exact match accuracy on the validation set over a range of iteration steps from 1 to 400 and selecting the earliest batch iteration at which validation set accuracy appeared to plateau.

B Results

The full set of fine-tuning data combinations is given in Table 3. All models c through i are trained using the same hyperparameters and strategy (Adam optimizer, 5e-6 learning rate, and 150 batches of size 12 examples each).

Table 3: Exact match and token-level F1 performance by each model on test and validation set data.
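The 450-token sliding-window shingling and the max-score aggregation described in Appendix A can be sketched as follows; a minimal illustration assuming token lists as input, not the authors' code:

```python
def shingle(tokens, window=450, step=225):
    """Split a long context into overlapping windows of exactly `window`
    tokens, stepping `step` at a time; the last window is pulled back so
    it never runs past the final token."""
    if len(tokens) <= window:
        return [tokens]
    starts = list(range(0, len(tokens) - window, step))
    starts.append(len(tokens) - window)  # adjust the final window position
    starts = sorted(set(starts))         # drop a duplicate landing point
    return [tokens[s:s + window] for s in starts]

def best_shingle(scores):
    """Pick the shingle whose max start score plus max end score is
    greatest (one predicted span per document).

    scores : list of (start_scores, end_scores) pairs, one per shingle.
    """
    return max(range(len(scores)),
               key=lambda i: max(scores[i][0]) + max(scores[i][1]))
```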
2021-05-25T01:15:57.395Z
2021-05-24T00:00:00.000
{ "year": 2021, "sha1": "31129ea14f1143c7d37767927db702b85299745b", "oa_license": "CCBY", "oa_url": "https://aclanthology.org/2021.findings-acl.325.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "31129ea14f1143c7d37767927db702b85299745b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
237938498
pes2o/s2orc
v3-fos-license
The Relationship between Metabolic Syndrome and Smoking and Alcohol Experiences in Adolescents from Low-Income Households

Metabolic syndrome (MetS) in children and adolescents is increasing globally and the age of onset is gradually decreasing. MetS is associated with serious health problems and presents an early risk for adult morbidity and mortality. From 2014-2019, we investigated the relationship between MetS and health behaviors such as smoking, alcohol consumption, and nutrition education in Korean adolescents (boys: 1235, girls: 1087, age: 13-18 years) based on household income; the relationship with hand grip strength was also evaluated. The prevalence of MetS was 8.8% in boys and 5.1% in girls; in the lowest income households, the risk increased ~1.5-fold for boys and ~4-fold for girls, whereas the risks of smoking and alcohol use increased 1.81 vs. 2.34 times, and 2.34 vs. 2.37 times, for boys vs. girls, respectively. In adolescents with the weakest grip strength, the risk of MetS increased 9.62 and 7.79 times in boys and girls, respectively. Girls lacking nutrition education exhibited a 1.67-fold increased risk of MetS, but this was not significant in boys. Low household income increased the risk of unhealthy behaviors such as smoking and alcohol consumption in both sexes and, together with low hand grip strength, was an important predictor for developing MetS.

Introduction

Metabolic syndrome (MetS) is defined as a cluster of factors that increase the risk for cardiovascular disease (CVD) and diabetes, including increased waist circumference, high systolic blood pressure, high triglyceride (TG) levels, elevated fasting blood sugar, and low high-density lipoprotein cholesterol (HDL-C) levels [1][2][3]. The overall prevalence of MetS is 22-44% [4]. In addition to the increase in obesity worldwide, the prevalence of MetS in children and adolescents is increasing [5]. The prevalence of MetS in children and adolescents in the total population is known to be 3.3%, and it has been reported that the prevalence of obesity in adolescents has increased to 29.2% [6]. Childhood MetS contributes to serious health problems and is an early risk factor for considerable adult morbidity and mortality [7]. The increased prevalence of MetS is mainly associated with abdominal obesity and increasingly sedentary lifestyles [8]. Therefore, in the field of public health, much attention is focused on health behavior modification to influence lifestyle changes in the general public in order to reduce obesity and increase physical activity [9]. Health behaviors are habits that affect the health of individuals. These include behaviors that promote health, such as physical activity and proper nutrition, as well as behaviors that increase the risk of disease, such as smoking and alcohol consumption [10]. Many of the leading causes of death and illness result from poor health behaviors. Unhealthy behaviors further increase the risk of MetS [11], and the more unhealthy behaviors are present in childhood, the higher the predicted risk of MetS in adulthood [12]. In particular, among health behaviors, physical activity is strongly shaped by the family physical activity environment in children and adolescents [13]. In addition, physical activity levels should be considered more specifically, as they can influence the way blood glucose is processed [14]. Previous studies have reported that lack of physical activity is associated with lower hand grip strength (HGS) [15][16][17].
Low HGS is a risk indicator for MetS in adults, and a similar risk is exhibited in adolescents [18]. A recent study reported a nonlinear relationship between HGS and the prevalence of MetS in adolescents [19]. HGS is correlated with muscle mass and total strength, and HGS testing is a simple and safe measure that can predict many risks, including diabetes, CVD, and mortality [20,21]. Studies have shown that adult physical activity level and HGS are related to household income [22][23][24], and that parental socioeconomic status impacts children's health behaviors, such as physical activity, smoking and alcohol use [25][26][27][28]. In addition, there is considerable evidence supporting an inverse relationship between socioeconomic status and the prevalence of MetS: the prevalence of MetS is significantly higher in households with the lowest incomes compared to households with average and high incomes [29,30]. Adolescents from socio-economically disadvantaged environments are more likely to engage in unhealthy behaviors, which is thought to increase the prevalence of MetS, but previous studies alone have limitations in explaining the relationship between the two. Reducing the risk of MetS in adolescents from low-income households requires improved public awareness that health behavior modification should be a major focus. Therefore, this study investigated the relationship of household income, HGS and health behavior with the prevalence of MetS in Korean adolescents. We hypothesized that adolescents from low-income households would exhibit a higher prevalence of MetS and a higher risk of engaging in unhealthy behaviors such as smoking and alcohol consumption than adolescents from high-income households.

Participants

To investigate the relationship between health behaviors related to household income and the prevalence of MetS, 2322 adolescents (boys: 1235, girls: 1087) aged 13-18 years who participated in the Korea National Health and Nutrition Survey (KNHNS) from 2014 to 2019 were included in this study. Initially, a total of 2778 adolescents consented to provide data for research purposes and participated during the study period. However, those who did not complete the MetS risk factor measurements (n = 12), had no HGS measurement (n = 74), had no income information (n = 10), did not provide information on smoking and alcohol (n = 12), or did not provide information on nutrition education (n = 348) were excluded from the study (Figure 1). In order to comply with research ethics, the purpose of the research and of the analysis of the results was explained to the adolescents and their legal guardians, and written informed consent was obtained. The study was approved by the research ethics committees of the Korea Disease Control and Prevention Agency (2015-01-02-6C, 2 January 2015) and Gangneung-Wonju National University (R2020-16, 18 March 2020).

Metabolic Syndrome

In this study, the criteria proposed by Cook et al. were used to diagnose MetS in adolescents [31]. Since the criteria for MetS in adults have not been formally defined for or applied to children or adolescents, Cook et al. modified the adult criteria to the closest representative values available from pediatric reference data to diagnose MetS in adolescents.
MetS was defined as the presence of three or more of the following components: waist circumference ≥ 90th percentile, blood pressure (BP) ≥ 90th percentile, HDL-C ≤ 40 mg/dL, fasting blood glucose ≥ 110 mg/dL, and TG ≥ 110 mg/dL. Additionally, having a history of drug treatment for any of these components was defined as possessing that component.

Hand Grip Strength

An individual's HGS level is highly correlated with the level of physical activity. Previous studies reported that lower HGS is associated with lower levels of physical activity and lower muscle mass. In this study, HGS data were analyzed to assess the participant's physical activity level [22]. HGS was measured using a digital dynamometer (TKK 5401, TAKEI, Niigata, Japan). Participants assumed a standing position with backs straight while looking straight ahead, with legs shoulder-width apart and both feet facing forward. The arms were allowed to hang naturally, ensuring that the elbows or wrists were not bent, with the arms not touching the torso. Also, care was taken to maintain the basic posture during measurement of the HGS. The grip of the dynamometer was adjusted so that the second joint of the participant's index finger was at 90° [32]. The maximum HGS was measured three times each on the left and right by alternating hands, and a rest period of 60 s was provided between the measurements. For the analysis, the data of the hand with the highest absolute value measured among both hands was used.
The value obtained by dividing the highest measured absolute value by the body weight was normalized to a percentage and used as a relative value. The measured grip strength was graded using quartiles for analysis, with the strongest group classified as G1 and the weakest group as G4.

Household Income and Health Behaviors

The socioeconomic characteristics were determined using an interview survey, and the monthly income of the parents was used to define the household income of the adolescents. Measured household income was graded using quintiles for analysis, with G1 for the highest income group and G5 for the lowest income group. Health behaviors with regard to smoking, alcohol consumption, and nutrition were surveyed using self-reported questionnaires with "yes" or "no" responses.

Data Analysis

SPSS 25.0 (SPSS Inc., Chicago, IL, USA) was used for data analysis. The Shapiro-Wilk test for normality was performed, and the main variables for analysis did not exhibit a normal distribution (p < 0.05). Therefore, non-parametric statistical methods were applied to compare the general characteristics by sex (Table 1) and between the MetS and non-MetS groups (Table 2). Among the general characteristics in Table 2, continuous variables were expressed as means and standard deviations, and the non-parametric Mann-Whitney test was used. Income quintiles, HGS quartiles, and categorical variables such as nutrition education, smoking and alcohol use (Table 3) were recorded as percentages, and the chi-square test was performed. For the prevalence of MetS, the prevalence of MetS risk factors, and smoking and alcohol exposure based on household income level, logistic regression analysis was performed to calculate the odds ratio (OR). The models were adjusted for age, household income, HGS, nutrition education, and smoking and alcohol experience. The significance level was set to p < 0.05, and the confidence interval (CI) of the odds ratio was set to 95%.

General Characteristics of Participants

Participants were classified by sex, and the general characteristics are shown in Table 1. There was no significant difference in age between boys and girls, but there were significant differences in height, weight, and body mass index (BMI).

MetS Prevalence According to Risk Factors, Household Income, HGS and Health Behaviors

MetS was diagnosed in 109 (8.8%) of 1126 boys and 55 (5.1%) of 1032 girls. When the risk factors for MetS were compared between the non-MetS and MetS groups, there were significant differences in waist circumference, systolic blood pressure (SBP), diastolic blood pressure (DBP), TG, fasting blood glucose, and HDL-C for both boys and girls. There was no significant difference between the non-MetS and MetS groups regarding the household income of boys, but there was a significant difference for the girls (p = 0.039). Relative HGS was significantly different between non-MetS and MetS groups for both boys (p < 0.001) and girls (p < 0.001). Health behavior factors included nutrition education, smoking and alcohol consumption. Regarding nutrition education, there was no significant difference in boys, but there was a significant difference between the non-MetS (21.9%) and MetS (17.5%) groups in girls (p = 0.028). There was no significant difference in smoking and alcohol consumption for both boys and girls (Table 2).
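The Cook et al. diagnostic rule described in the Methods is a simple count of components; a minimal Python sketch follows. The percentile cutoffs for waist circumference and blood pressure are taken as precomputed inputs, and the field names are our own:

```python
def has_mets(waist_pct, bp_pct, hdl_mg_dl, glucose_mg_dl, tg_mg_dl,
             treated=()):
    """Cook et al. criteria: MetS = three or more components present.

    waist_pct, bp_pct : percentiles relative to pediatric reference data
    treated           : component names with a history of drug treatment,
                        which count as present regardless of the value
    """
    components = {
        "waist":   waist_pct >= 90,
        "bp":      bp_pct >= 90,
        "hdl":     hdl_mg_dl <= 40,
        "glucose": glucose_mg_dl >= 110,
        "tg":      tg_mg_dl >= 110,
    }
    for name in treated:
        components[name] = True
    return sum(components.values()) >= 3

# Example: elevated waist and TG plus low HDL-C -> MetS positive.
print(has_mets(waist_pct=95, bp_pct=50, hdl_mg_dl=38,
               glucose_mg_dl=92, tg_mg_dl=130))  # True
```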
MetS Odds Ratio According to Household Income, HGS and Health Behaviors

Household income was analyzed by its classification into quintiles, with G1 representing the highest group and G5 the lowest. In the group with the lowest household income, the risk of developing MetS increased 1.45 times (p = 0.041) for boys and 4.05 times (p = 0.018) for girls compared to the group with the highest household income. HGS was analyzed by grouping it into quartiles, G1 representing the strongest HGS group and G4 the weakest. In the group with the weakest HGS, the risk of developing MetS increased 9.62 times (p < 0.001) for boys and 7.79 times (p < 0.001) for girls. Among health behavior factors, lack of nutritional education increased the risk of developing MetS 1.67 times in girls without education compared to girls with education (p = 0.043), but not significantly in boys. Smoking and alcohol did not significantly increase the risk of developing MetS in exposed adolescents compared to non-exposed adolescents (Table 3).

Relationship between Smoking and Alcohol Experience and MetS Risk Factors

In the case of smoking exposure, there were significant differences in waist circumference (p = 0.043), SBP (p = 0.041) and DBP (p < 0.001) in boys, but there were no significant differences in TG, fasting blood glucose, and HDL-C. In girls, there was a significant difference only in DBP (p = 0.010) based on smoking exposure. In the case of alcohol consumption, there were significant differences in waist circumference (p = 0.001), SBP (p = 0.012), DBP (p = 0.001), and TG (p = 0.043) in boys, but there was no significant difference in fasting blood glucose and HDL-C. In girls, there was a significant difference only in waist circumference (p = 0.022) according to alcohol consumption (Table 4).

Smoking and Alcohol Consumption Odds Ratios According to Household Income

In the group with the lowest household income, the risk of smoking experience increased 1.81 times (p = 0.020) for boys and 2.34 times (p = 0.012) for girls compared to the group with the highest household income. The risk of alcohol consumption increased 2.34 times (p = 0.012) for boys and 2.37 times (p < 0.001) for girls in the group with the lowest household income compared to the group with the highest household income (Table 5).

Discussion

The prevalence and risk factors of MetS are affected by individual psychosocial factors [33,34] and health behaviors such as exercise, nutrition, alcohol intake and smoking [35][36][37]. The combination of poor health behaviors such as lack of physical activity, poor diet, smoking, and excessive alcohol consumption with factors such as frequent stress and low socioeconomic status greatly increases the prevalence of MetS. Recent studies have reported that socioeconomic factors such as household income are related to the prevalence of MetS and CVD [38]. In addition, low household income is strongly associated with adolescents' level of health behavior awareness, which may have a significant impact on an increased prevalence of MetS [39]. The mechanism by which low household income influences health behavior cannot be fully explained simply by the inability to purchase health-promoting goods and services. Logically, smoking and consuming alcohol are behaviors that expend resources on products that are unhealthy, whereas exercise such as walking requires no expenditure.
Ironically, it is known that individuals with lower household incomes spend more on unhealthy behaviors and participate less in health-promoting behaviors that cost little [9]. Previous studies investigated the relationship between household income and health behavior to explain why these contrary health behaviors are observed in individuals with low household incomes. According to a study by Lantz et al., people with lower household incomes were more likely to seek aid in controlling their mood through smoking, overeating, drinking and inactivity when faced with stressful situations [40]. Siahpush et al. reported a lack of knowledge and access to information on how health behavior affects health risks as another reason [41]. Individuals with fewer learning opportunities or lower educational attainment due to unfavorable socioeconomic circumstances may be less motivated to adopt healthy behaviors as their knowledge of the risks of unhealthy behaviors may be limited. This study attempted to determine whether household income is a factor influencing the prevalence of MetS and health behavioral factors such as smoking, alcohol, nutrition education experience, and physical activity in adolescents as well. In the results of this study, the prevalence of MetS in adolescents based on household income differed according to sex, and only girls exhibited a significant inverse relationship. However, the risk of developing MetS for both boys and girls showed a tendency to increase with lower household income. These results imply that all adolescents who are placed in an unfavorable socioeconomic environment are exposed to an increased risk of MetS, regardless of sex. HGS is correlated with total strength and muscle mass related to physical activity among health behaviors, and weak HGS indicates a lack of physical activity [20,21]. In addition, weak HGS is a clinical indicator of stamina deterioration, as it has been associated with many negative health outcomes [42]. As expected, low HGS in this study was a strong predictor of MetS in both boys and girls. In adolescents, the risk of developing MetS significantly increased as the HGS decreased. Balanced nutrition prevents many diseases and plays an important role in improving physical and intellectual efficiency [36]. In particular, an unfavorable socioeconomic environment is considered as one of the causes of poor nutrition. In the results of this study, the prevalence and risk of MetS were significantly increased only in girls without nutrition education. These results are thought to be due to the fact that adolescent girls are more sensitive to changes in body weight and body shape according to diet than boys and have higher adherence to nutrition education. For both smoking and alcohol consumption, the risk of exposure significantly increased with lower household income for both boys and girls. Although the prevalence and risk of MetS did not significantly increase in the smoking and alcohol consuming boys, the risk factors for MetS such as waist circumference, SBP, DBP and TG were significantly higher. Also, in the girls, DBP was significantly higher when they smoked, and waist circumference was significantly higher when they consumed alcohol. These results are partly consistent with the results of a study by Slagter et al. [37] who reported that smoking and alcohol were highly correlated with an increased risk of abdominal obesity and hypertension. 
Previous studies have reported a significant decrease in HDL-C levels with higher smoking frequency and alcohol consumption. However, this study could not confirm such results, because exposure was analyzed with only "yes" or "no" responses. The components of MetS, such as low HDL-C, high BP and fasting blood glucose, and abdominal obesity, are each considered a disease. However, even when a value is only at a borderline level, each risk factor must be considered carefully, because several of them form a cluster and together contribute to an increased risk of CVD and diabetes. In this study, adolescents placed in socioeconomically disadvantaged environments such as low-income households had weaker muscle strength and less health-related knowledge such as nutrition education. They were also significantly more likely to be exposed to smoking or alcohol. This suggests that adolescents in low-income households are more likely to experience adverse social, physical and economic environments that may contribute to worse health outcomes. Consequently, adolescents from low-income households are more likely to develop MetS because they lack access to information on what promotes or harms health and are at higher risk of experiencing unhealthy behaviors. These results indicate that public policy interventions focused on mitigating the adverse effects of low household income on health behavior are required, because socioeconomic disparity is a problem that cannot be effectively overcome solely by individual efforts. This study has several limitations. First, it was not possible to filter out false reports, as the analysis was based on data recorded in the self-report questionnaire. Second, quantitative data such as the amount or frequency of smoking and alcohol consumption were not obtained. Third, it was not possible to distinguish between those who had discontinued alcohol use and those who were current drinkers. Finally, the physical strength measurement used to estimate lack of physical activity was limited to HGS. Although previous studies reported that higher cardiorespiratory fitness is associated with lower MetS prevalence, this study did not measure aerobic exercise capacity due to the environmental limitations of a large-scale investigation [43]. Further studies are required to address these limitations. Notwithstanding these limitations, this study has the strength of providing significant information on the relationship between household income and health behavior as a predictor of MetS in adolescents. This information can be helpful in improving public awareness that modifying health behavior in order to reduce the risk of MetS in adolescents from low-income households should be a priority.

Conclusions

In this study, low household income increased the risk of unhealthy behaviors such as smoking and alcohol consumption in both boys and girls, and was an important determinant of MetS risk along with low hand grip strength. These findings suggest that household income is another predictor of the prevalence of MetS in adolescents. Therefore, social intervention to prevent MetS among adolescents who are placed in an unfavorable socioeconomic environment is important, and efforts are needed to systematize national health education programs so that all adolescents can equally establish awareness of health-promoting behaviors.
2021-09-28T05:10:36.338Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "5860d5a44570f612f2eb27f2e48d9d4194c5feaf", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9067/8/9/812/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5860d5a44570f612f2eb27f2e48d9d4194c5feaf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
239110233
pes2o/s2orc
v3-fos-license
MR-guided spine interventions: time to get off the ground?

In their recently published article, "MR-guided lumbar facet radiofrequency denervation for treatment of patients with chronic low back pain in an open 1.0 Tesla MRI system", Böning and his research team evaluate the feasibility, safety, and efficacy of lumbar facet joint radiofrequency denervation (FRD) under MRI guidance, using an open 1.0 Tesla scanner (1). The study results confirm the clinical efficacy of the FRD procedure in terms of pain reduction at the medium-term follow-up, as demonstrated in the previous literature (2)(3)(4). The use of the MR guide represents the peculiarity of this report, and opens up space for a series of important reflections. Spinal interventional radiology, thanks to its high procedural success together with its minimal invasiveness and high safety profile, has been gaining an increasingly wider field of applications for degenerative, neoplastic, and inflammatory pathology, with several techniques, procedures, implantable materials, and injectable drugs available (4-10). The need for real-time imaging guidance makes fluoroscopy a fundamental guiding method, and CT is another modality of choice thanks to its high-spatial-resolution sectional visualization (11). MR guidance provides a sectional visualization, with the advantage of a significant gain in tissue contrast resolution. The latter is a major advantage, especially in cases where bone or soft tissue lesions poorly delineated by a non-contrast CT scan have to be targeted, or in body districts where the anatomical characterization of nervous and vascular structures is a priority, even without contrast medium administration (12). An excellent compromise between the contrast resolution provided by MR imaging and the speed of a real-time guide can be offered by ultrasound-MRI fusion navigation systems, already used and widespread for biopsy guidance and lumbar facet joint injections. This US-MRI fusion approach, performed at the CT gantry and combined with CT guidance, could be an appealing solution for many procedures (13,14). Indeed, MRI guidance for interventional radiology procedures, especially around the spine, is not an absolute novelty. In the literature, there are reports on the use of MRI guidance to carry out procedures such as nerve root injection, facet joint injection, epidural injection, facet joint neurotomy, and biopsies, dating back even to the last decade (15,16). Nevertheless, it seems that there has not been a rampant increase in the application of this procedure in recent years. There may be several critical factors to be assessed on this point. Definitely, the ergonomics of the MR environment must be considered, first and foremost the operating field space; from this point of view, open, low-field magnets provide greater handling and comfort.
The disadvantage of these scanners (0.2 T) was mainly related to the low signal-to-noise ratio, and the recent development of high-field scanners with wider gantries and high-field-strength open magnets (1 T) may partially overcome these limitations (16). Furthermore, the sequences used are increasingly optimized to obtain quick imaging feedback with good needle/probe visualization. Another critical aspect is represented by the availability and costs of MR-compatible materials, not only regarding needles and probes, but also the instrumentation for anesthesiologic support inside the MRI room. Further studies are also needed to assess operator and patient comfort and safety, as well as procedural times, compared with fluoroscopy- and CT-guided procedures. The absence of ionizing radiation when using MR guidance is an undoubted advantage. Although this is a priority and a tangible benefit for fertile women, children, and young subjects submitted to multiple procedures, there is a need to evaluate the cost-benefit ratio in elderly patients. All these factors probably led to an unavoidable delay in the foundation of universal expertise, and therefore in the rapid diffusion and availability of these procedures, which are advantageous from many points of view and have enormous clinical potential.

Acknowledgments

Funding: None.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
2021-10-21T15:11:00.523Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "46685908a4f06e57966f908a5c473050be37c74e", "oa_license": "CCBYNCND", "oa_url": "https://atm.amegroups.com/article/viewFile/81444/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "db9fa41a621e723ba26cfb29bfdd5afe0fc40ea5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
124196051
pes2o/s2orc
v3-fos-license
Transport coefficients in the 2-dimensional Boltzmann equation

We show that a rarefied system of hard disks in a plane, described in the Boltzmann-Grad limit by the 2-dimensional Boltzmann equation, has bounded transport coefficients. This is proved by showing opportune compactness properties of the gain part of the linearized Boltzmann operator.

Introduction

The interest in low dimensional systems has recently increased because of their relevance in the study of nano-structures. One of the questions arising in this context is whether or not the transport coefficients are well defined. It is a common point of view, supported by experiments and numerical results (see for example [4,7,8]), that heat conductivity may become infinite. Theoretical arguments, based on the Green-Kubo formula and the slow decay of time self-correlations of momentum and energy fluxes, seem also to support such a conclusion. Examples where the (un)boundedness is proved are provided by stochastic lattice particle systems (see [1] and references quoted therein). The deterministic continuous systems are out of reach of the present mathematical techniques, except for the case of rarefied gases. In this case, the Boltzmann equation has been proved to be a good approximation of the time behavior of the system in the Boltzmann-Grad limit, at least for short times [5]. It is not obvious that such a limiting procedure does not destroy the long time tails in the correlations. In this short note we will discuss this question and show that the transport coefficients are indeed bounded in the Boltzmann-Grad limit. In low dimension, the validity of the Boltzmann equation for hard spheres in a thin layer has been proved in [3], by considering a three dimensional system with one side much smaller than the others. It has been proved that, as long as the smaller side is still large compared to the interaction length, the limit equation is the Boltzmann equation with two dimensional positions and three dimensional velocities. In this case the transport coefficients, already computed by Maxwell [6], are bounded. However, the argument in [3] does not apply when the small side is of size comparable with the interaction length. In this case, it is shown in [3] that, in the linear case (Lorentz gas), the limiting behavior is not described by the linear Boltzmann equation. One can then consider a strictly two dimensional system, with both two dimensional positions and velocities, namely a system of hard disks moving on a plane. The Lanford proof [5] works also in this case and the limiting equation is the Boltzmann equation with positions and velocities in R². Therefore one can try to compute the transport coefficients for this system in the Boltzmann-Grad limit by means of the Boltzmann equation. The transport coefficients are obtained by solving an integral equation of the form

$$\hat{L}\varphi = g, \qquad (1.1)$$

where L̂ is the Boltzmann collision operator linearized around a Maxwellian and the g are suitably chosen functions of the velocities. The linearized Boltzmann operator has the form

$$\hat{L} = -\nu + K,$$

where ν is a multiplication operator and K is an integral operator. Since there is no small parameter in the equation, its solution is based on the Fredholm theory [9]. This is well known when the velocities are in R³ (see for example [2]), while in dimension two this requires some analysis. In the next section we present the explicit expression of the kernel of the operator K for hard disks, while in Section 3 we show its compactness in a suitable space.
Then from the Fredholm alternative we conclude that equation (1.1) can be solved in the suitable space, and as a consequence the transport coefficients are bounded.

Estimates on the kernels

The Boltzmann equation for the probability density f(x, v, t) on the phase space R^d × R^d is written as

∂_t f + v · ∇_x f = Q(f, f). (2.1)

In the following, the dimension d of the position and velocity space will eventually be fixed to 2. The Boltzmann collision operator Q is the hard-sphere (hard-disk for d = 2) operator, with ω ∈ S^{d-1}, the unit sphere in R^d: S^{d-1} = {ω ∈ R^d | |ω| = 1}. We will use an equivalent expression for Q, obtained by rewriting the angular integration by means of the Heaviside function η(x).

Let M be the standard Maxwellian, so that Q(M, M) = 0, and consider a small perturbation of M as in (2.6). The linearized Boltzmann equation is obtained by plugging (2.6) into (2.1) and neglecting quadratic terms; with L̂ the linearized Boltzmann operator, the operator L̂ splits into a multiplication part and two integral parts. In the explicit expressions, the ∗-product denotes the convolution product, ω_1 is any fixed unit vector, and |S^n| is the surface of the unit sphere in R^{n+1}. The operators L̂_1 and L̂_2 are integral operators of the form (2.17)-(2.18); the explicit expression of their kernels K̂_i will be given in the proof of Proposition 2.1 below.

The transport coefficients are computed in the Chapman-Enskog expansion by solving the integral equation

L̂ ϕ = g, (2.19)

where one has to choose g, with α, β = 1, . . . , d, as the momentum flux (2.20) to compute the viscosity coefficient and as the energy flux (2.21) to compute the heat conductivity. It is convenient to symmetrize the operator L̂ by conjugating with √M; the operator L so defined is symmetric in L^2(R^d). Since the operator L has a non-trivial null space, spanned by the collision invariants weighted by the Maxwellian, in order that equation (2.22) have solutions the right-hand side h has to be orthogonal to the null space, a condition which is fulfilled by the functions h = √M g with g given by (2.20) and (2.21). Therefore it only remains to establish sufficient conditions to apply the Fredholm alternative theorems [9]. We set ψ = √ν ψ and h = √ν h, and (2.22) becomes (2.23) and (2.24).

We are interested in the case d = 2. The explicit expressions of the kernels K̂_i(v, w) and K_i(v, w) in d = 2 are given in Proposition 2.1, equations (2.26)-(2.33), with constants a = b = 2/π. Proof. The proof of Proposition 2.1 is given in the Appendix.

The following estimates are based on the above explicit formulas. As a consequence of Proposition 2.1 and Lemma 2.2, we have the following estimate for the kernels K_1 and K_2: there are constants C_1 and C_2 such that the bounds (2.47)-(2.48) hold. Proof. It is enough to note that

(1 + min{|v|, |w|}) / ((1 + |v|)(1 + |w|)) ≤ 1. (2.49)

The next proposition (Proposition 2.3) contains the essential estimates needed to prove the compactness of the operators L_1 and L_2: the functions h_i, i = 1, 2, appearing there are bounded, monotone, and go to 0 as |v| → ∞.

Compactness

To prove the compactness of the operators L_1 and L_2, we introduce, for any R > 0, a velocity cutoff and the corresponding truncated kernels for i = 1, 2, and denote by Q_i^R, S_i^R and P_i^R the corresponding operators on L^2(R^2). The previous estimates show that the operators S_i^R and P_i^R go to 0 as R → ∞ in the uniform operator norm. To prove this we need estimates of the form ‖S_i^R‖ = o(R) and ‖P_i^R‖ = o(R), with the norm induced by the L^2(R^2)-norm and o(R) → 0 as R → ∞. Let φ(v, w) ≥ 0 be one of these kernels; a Schur-type bound then gives, for any f ∈ L^2(R^2), the required estimate. Let us now consider the case φ(v, w) = S_i^R, recalling the definition of g_i in Proposition 2.3. The compactness of the operators Q_i^R for fixed R is standard (see [9], pages 229-231).
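The uniform-norm bounds for the truncated operators rest on the classical Schur test; a minimal statement of the estimate being used (our formulation) is:

% Schur test: bounded row/column integrals give an L^2 operator bound.
\begin{align*}
&\sup_{v}\int_{\mathbb{R}^2}\varphi(v,w)\,dw \le A,
\qquad
\sup_{w}\int_{\mathbb{R}^2}\varphi(v,w)\,dv \le B\\
&\Longrightarrow\qquad
\Bigl\|\,\int_{\mathbb{R}^2}\varphi(\cdot,w)f(w)\,dw\,\Bigr\|_{L^2(\mathbb{R}^2)}
\le \sqrt{AB}\;\|f\|_{L^2(\mathbb{R}^2)} .
\end{align*}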
Indeed, from Proposition 2.3 the kernel Q_1^R is a continuous function on a bounded set, hence uniformly continuous, and this is sufficient for the compactness. As for the kernel Q_2^R, we have to take care of the singularity at v = w. To do this we proceed in the same spirit as before. Since now R is fixed, we drop the superscript R and introduce a (smooth version of the) characteristic function, χ_ε(v) = η(ε - |v|) and χ_ε^c(|v|) = η(|v| - ε), and write

Q_2(v, w) = χ_ε(|v - w|) Q_2(v, w) + χ_ε^c(|v - w|) Q_2(v, w).

Clearly, for the part χ_ε^c(|v - w|) Q_2(v, w) we can use the same argument used for Q_1. We then show that χ_ε(|v - w|) Q_2(v, w) goes to 0 as ε → 0 in the uniform operator norm; indeed, by the same argument used to obtain (3.6), the corresponding norm vanishes as ε → 0. Therefore the exponent in the exponentials is exactly A(v, w), and this concludes the proof of Proposition 2.1.
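Schematically, the compactness of Q_2 obtained above follows from the norm-limit structure of the splitting; a minimal sketch, assuming the near-diagonal integrability proved in the text (and its symmetric counterpart with v and w exchanged):

\begin{align*}
&Q_2 = \chi_\varepsilon(|v-w|)\,Q_2 + \chi^c_\varepsilon(|v-w|)\,Q_2,\\
&\sup_v \int_{|v-w|\le\varepsilon} Q_2(v,w)\,dw
 \;\xrightarrow[\varepsilon\to 0]{}\; 0
\quad\Longrightarrow\quad
\bigl\|\chi_\varepsilon Q_2\bigr\|_{L^2\to L^2}
 \xrightarrow[\varepsilon\to 0]{} 0,
\end{align*}
while χ_ε^c Q_2 is continuous on the bounded truncated domain, hence defines a compact operator; Q_2 is then a norm limit of compact operators and is itself compact.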
Supervised Scene Illumination Control in Stereo Arthroscopes for Robot Assisted Minimally Invasive Surgery

Minimally invasive surgery (MIS) offers many advantages to patients, but it also imposes limitations on surgeons' ability, as no tactile or haptic feedback is available. From a medical robotics perspective, visualization issues specific to MIS, such as the limited field of view and the lack of automatic exposure control of the surgical area, make it challenging to track tissue, tools and camera pose, as well as to perceive depth. Lighting plays an important role in 3D reconstruction, and variations due to internal illumination conditions are known to degrade vital visual information. In this work, we describe a supervised adaptive light control system to solve some of the image visualization problems of MIS. Our proposed method is able to classify under-exposed and over-exposed frames and adjust lighting conditions automatically to enrich image quality. Our method uses support vector machines to classify different illumination conditions. Visual feedback is provided by gradient information to assess image quality and justify the classifier decision. The output of this system has been tested against data from two cadaver knee experiments, with an overall accuracy of 97.75% for the under-exposed and 89.11% for the over-exposed classes. Hardware implementation of this classifier is expected to result in adaptive lighting for robot-assisted surgery, as well as to support surgeons by freeing them from manual adjustments to lighting controls.

I. INTRODUCTION

MIS deprives surgeons of haptic and tactile sensing; hence, vision is the only source of information available to them. In other words, surgeons have to operate on a 3D surface based on 2D images projected on a screen. This, combined with a limited field of view, adds further complexity to MIS. In recent years, many studies have aimed at increasing the accuracy of MIS [1]-[5]. It is well established that the vision system plays an important role and that robust vision is essential to avoid unintentional injury during MIS. Some recent studies suggest that simply extending the visualization of the surgical area can significantly increase accuracy and reduce operating time. However, 3D reconstruction of the surgical scene is desired to further improve surgical outcomes [1], [6], [7]. Furthermore, 3D reconstruction is expected to have a great impact on robotic-assisted surgery, for instance in tissue and tool tracking and pose estimation. Stereoscopic vision, simultaneous localization and mapping (SLAM) and photometric stereo are widely used to reconstruct 3D surfaces, along with alternative methods such as scanning technology [40]-[43], [46]. All these methodologies often encounter difficulties, primarily due to the complex optical nature of anatomic constructs and also due to the lack of control over the lighting [1], [46], [47]. This is also true for mostly soft tissue such as the ACL.

Fig. 1 caption: Saturated and dark pixels contain little to no color and texture information. Images are taken from the video sequences of a cadaver knee experiment. Point A represents saturated pixels caused by soft tissue (near object). Point B represents an under-exposed part caused by the bone cavity or by occlusion due to tissue arrangement (far object), and point C represents shadowed (under-exposed) parts.

Saturated and dark pixels likewise degrade other computer vision algorithms such as segmentation [48]. Additionally, different MIS procedures encounter different levels of constraints.
As an example, arthroscopy is subject to dimensional constraints due to the fixed bone-joint space, whereas laparoscopic MIS can access larger volumes. A small change in light intensity can lead to an image becoming over- or under-exposed, a widely encountered problem in arthroscopy. Saturated image parts contain almost zero information and no features; in situations like this, the stereo correspondence search converges to an ambiguous result, causing the whole 3D reconstruction process to fail. Partial reconstruction can be a possible solution when the light intensity is changed to a lower value. A similar problem also arises when a scene contains low light intensity, which is often marked as an under-exposed or dark image. Apart from the 3D reconstruction process, other vision-related tasks such as segmentation and registration, localization and pose estimation encounter the same problems, which compromises the overall performance of the whole system [1]. In conclusion, the light source and lighting conditions have a great influence on MIS visualization. Surgeons and robotic platforms require a stable view of the internal surgical scene. In knee arthroscopy, the presence of the diffuser (water), the anatomical structure, and camera motion are the most common factors that prevent a uniform distribution of light intensity. Moreover, the control system must respond as fast as possible due to the camera motion. For robust surgical vision, this fundamental problem has to be solved.

II. PROBLEM FORMULATION

Considering the anatomical structure, in some MIS procedures such as knee arthroscopy, tissues further away from the arthroscope often receive less light, which causes the formation of dark (under-exposed) pixels, whilst near objects encounter pixel saturation (over-exposure). This situation is depicted in Fig. 1. Although over-exposure and under-exposure frequently occur in all MIS procedures, in the current work this problem is analyzed in the context of knee arthroscopy. Owing to the presence of curved bone structures and narrow spaces, different areas of a surgical scene can receive different levels of illumination, which results in over- and under-exposure. An example of this scenario is shown in Fig. 2. Moreover, some tissues reflect more light than others; a representative example is the Anterior Cruciate Ligament (ACL), which often causes pixel saturation. In addition, the distal parts of the anatomy, e.g. bone and soft tissue joints, create a more complex situation, as depicted in Fig. 1-(a), where some parts remain over-exposed and others under-exposed in the same frame. While the total scene intensity may remain the same, increasing the light intensity saturates parts that are closer to the camera, and decreasing the light worsens the under-exposure of the distant parts. Automatic methods that balance both over-exposed and under-exposed fractions with minimum frame loss are therefore highly desirable. Light intensity is currently controlled manually, and surgeons have to adjust it multiple times, which interrupts the flow of the surgical procedure. Clearly, the manual approach to lighting control is not feasible for robotic-assisted surgery. Robotic imaging systems require robust lighting control for depth estimation.
In this work, we present a supervised light control system that emulates an endoscopic camera illumination controller and provides real-time feedback to improve stereo system accuracy and robustness, by recommending adjustments to the scene illumination that enhance scene features and image contrast even when the scene is considered normally exposed. We have trained and validated our system with two distinct cadaver experiments.

III. RELATED WORK

To date, no work has been found that resolves the illumination problem automatically in arthroscopic or endoscopic imaging systems. A large body of work exists on controlling camera exposure time, i.e. the amount of light that falls on the image sensor, which has some similarity to the MIS visualization problems stated above. However, it is worth noting that most of these approaches are based on static images. In this context, the camera response versus scene light intensity is evaluated through a relation of the form

V = K · L · G · S,

where V is the sum value of the imaging signal, K is a constant, L is the brightness of the object, G is the gain of the automatic gain control (AGC) circuit, and S represents the shutter speed of the camera. The automatic exposure bracketing based method is widely used to adjust exposure time [10], [12], [13]. Pourreza et al. presented an automatic exposure selection method based on exposure bracketing [10]. It requires n image frames with different exposure values. Their method uses two building blocks, namely scene analysis and exposure time: a clustering method is used to divide the image frame into three parts, and the exposure time is calculated through the camera response function. Radiometric-based exposure control has been proposed by Kim et al. for outdoor scene analysis [11]. The radiometric camera response function is specific to the camera sensor [14], [15], which often yields camera-specific solutions; moreover, this method also requires a set of still images. Statistical measures such as the mean, mean sample value, etc. are used to characterize the scene intensity distribution of normally exposed images. The impact of lighting on image formation is significant; some works address light noise conditions and recommend the use of filters, artificial intelligence algorithms and new optoelectronic devices [44], [45]. Alternatively, neural networks have been studied to estimate the chromaticity of image intensity [19]. Geoffroy et al. proposed a method that uses a convolutional neural network (CNN) to estimate illumination intensity for high dynamic range images; in their work, a CNN predicts the illumination of low dynamic range images. A CNN usually requires a large training dataset, and deep learning requires a dedicated number of processing units to achieve real-time performance [20]. Some other methods assess image quality at different exposure values [21]; images with different exposure values can be obtained by transforming an image using gamma correction or a tone mapping curve [21], [22]. In the MIS context these approaches are not always valid: camera motion, tissue movement, non-uniform illumination, surface curvature and the distance between the camera and the surface limit the utility of almost all static image-based approaches.
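The gamma-correction trick mentioned above is easy to make concrete: a single capture can be re-exposed synthetically by applying a gamma curve. A minimal sketch, with our own function names and a random stand-in frame:

import numpy as np

def simulate_exposure(gray: np.ndarray, gamma: float) -> np.ndarray:
    """Re-expose an 8-bit grayscale frame with a gamma curve.

    gamma < 1 brightens (mimics longer exposure), gamma > 1 darkens.
    This is how static-image methods build a stack of differently
    exposed frames from one capture.
    """
    norm = gray.astype(np.float64) / 255.0
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)

# Example: sweep a few simulated exposures of one frame.
frame = (np.random.rand(400, 400) * 255).astype(np.uint8)  # stand-in frame
stack = [simulate_exposure(frame, g) for g in (0.5, 0.8, 1.0, 1.25, 2.0)]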
In the endoscopic imaging process we need to minimize frame loss, whereas exposure bracketing methods always produce a number of redundant frames, and the camera has to remain in a fixed position multiple times for image stabilization. Moreover, most of the literature does not consider the underwater or turbid environment encountered in MIS. In this context, scattering is one of the most significant problems: background light saturates the image sensor, making it difficult to visualize the object of interest [16].

IV. METHODOLOGY

We define the illumination control problem as a classification problem. The aim of this work is therefore to identify whether classification and regression methods can infer the level of illumination required to visualize the surgical scene in terms of light distribution. The scene lighting conditions are grouped into three classes: (i) over-exposed, (ii) under-exposed, and (iii) normal. Usually the first two scene categories overlap. This method provides a solution that establishes robust illumination conditions for both stereo and monocular endoscopes. We define the 'area of interest' as the overlapping region between the stereo image pair captured by the stereo endoscope developed in our lab. The area of interest plays a significant role in the stereo matching process, which is the core mechanism of stereo vision, so it should preserve sufficient image context.

Image pixel values are grouped into three groups, and different threshold values are defined for this purpose. Usually threshold values are set by experiment [37]-[39]. In this work, eight-bit gray scale threshold values are defined empirically by observing the intensity responses of the internal anatomy with vision processing algorithms such as the rank transform and stereo matching, followed by the limits of human visibility. With I(x, y) denoting the pixel intensity at image coordinate (x, y), a pixel is labeled over-exposed when I(x, y) is above an upper threshold, under-exposed when it is below a lower threshold, and normally exposed otherwise. The maximum intensity of a saturated pixel at eight-bit gray scale level is 255; however, when a pixel is about to saturate, it has the same effect on the human visual system and on vision processing methods, as depicted in Fig. 3. Under-exposed pixels behave similarly. Research has found that middle gray tone pixels receive the optimal exposure value, which is 128 at eight-bit gray scale level [39]. In order to identify optimal values for these two extremes, the arthroscopy video sequences are segmented into two slices where applicable, marked as over-exposed and under-exposed. A range of gray level values is then applied to the whole video sequences in order to find the optimal gray level intensity values for these two extremes: when a certain gray level achieves the maximum segmentation accuracy, that value is selected as the threshold. In this work, intersection over union (IoU) is used to evaluate segmentation accuracy at the different threshold values.

The under-exposed, over-exposed and normal indices describe the overall image statistics. The under-exposed index expresses the relative amount of under-exposed pixels over the whole image; similarly, the over-exposed and normally exposed indices express the relative amount of over- and normally exposed pixels over the whole image. These three indices describe the whole dataset in 2D space. A Support Vector Machine (SVM) with a radial basis function (Gaussian kernel) is used.
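As a concrete illustration of the per-frame features just described, the sketch below computes the three exposure indices from a grayscale frame. The threshold values 226 and 101 are the ones selected empirically via the IoU sweep described later in the paper; the function names are ours:

import numpy as np

# Empirically selected 8-bit thresholds from the paper's IoU sweep
# (over-exposed ~226, under-exposed ~101); names are illustrative.
T_OVER, T_UNDER = 226, 101

def exposure_indices(gray: np.ndarray):
    """Return (over, under, normal) pixel fractions of an 8-bit frame."""
    n = gray.size
    over = float(np.count_nonzero(gray >= T_OVER)) / n
    under = float(np.count_nonzero(gray <= T_UNDER)) / n
    return over, under, 1.0 - over - under

# Example: features for one (synthetic) frame, ready for the SVM stage.
frame = (np.random.rand(400, 400) * 255).astype(np.uint8)
over_idx, under_idx, normal_idx = exposure_indices(frame)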
SVM was originally proposed by V. N. Vapnik [8]. SVMs are widely used to address classification as well as regression problems, as described in the literature [49]. An SVM is a binary classifier, but in conjunction with other SVMs it can address multi-class problems. It separates data with the largest possible inter-class margin. Unlike multi-layer perceptrons such as deep learning networks, an SVM can be trained with a limited dataset. It consumes fewer hardware resources than a deep network, yet provides efficient classification results in our problem domain. Instead of implementing a multi-layer perceptron and conditional probability computation, the SVM provides support vectors that are used to compute the class distance from the hyperplane at run time. Moreover, the light control system has to operate in real time with minimal frame loss, and a trained SVM meets these requirements. The most difficult problem is to identify over-exposed and under-exposed scenes. To classify the three lighting conditions (under-exposed, normal, over-exposed), an SVM with a radial basis function (RBF) kernel is applied,

K(x_i, x_j) = exp(-γ ||x_i - x_j||²). (3)

The advantage of the radial basis function is that, given sufficient data, it can define an efficient discriminating hyperplane for nonlinear datasets, and it does not suffer from saturation problems. Since the lighting distributions are nonlinear, as depicted in Figure 6, we chose a nonlinear kernel function for the proposed classifier. Comparative studies show that the radial basis function outperforms other nonlinear kernels, such as the polynomial kernel, on nonlinear data [50], [51]. A random subset of the shuffled sample dataset was compared against different nonlinear kernels before the RBF kernel was selected for this approach: the mean cross-validation accuracies are 0.9559, 0.8204, and 0.9815 for the sigmoid, polynomial and radial basis function kernels, respectively. Optimization is performed to establish the classification and marginal error tolerance depicted in Figure 6.

The feedback system justifies the classifier outcomes in the real environment. Under sufficient lighting conditions an image contains more contextual features, such as edges. Based on this, the feedback system compares the present image context with the past image context, which can be achieved by evaluating image features such as edges and corners. In the underwater environment, it has been observed that two consecutive image frames show only a subtle gradient change. Therefore, an empirical threshold is required to calibrate the system.

Figure caption: Areas marked with points A and O are over-exposed regions; as a consequence, a depth discontinuity is observed. Hence, the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) show poor results, whereas the right image (b) does not experience over-exposed pixels; its SSIM is 0.58. (c) Similarly, the segmentation process shows poor outcomes in the over-exposed area marked by point C, whereas the well-exposed region, denoted by point D in the right image, shows better accuracy. The accuracy metric intersection over union (IoU) for the over-exposed image is 0.58, whereas the well-exposed one achieved 0.94.

Figure 4 presents our approach to defining the individual threshold values across all image conditions. For each frame, different gray scale intensity values are used to perform binary segmentation. The outcome for each threshold value is then compared to the ground truth mask, and the IoU (Jaccard index) metric is calculated.
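Since the paper builds its classifiers with scikit (as noted in Section VII), the two-stage scheme can be sketched as below. The feature construction, toy labels and function names are ours; the RBF hyperparameters are the optima reported later in the training section (first stage: gamma = 0.001, C = 100; second stage: gamma = 0.01, C = 1):

import numpy as np
from sklearn.svm import SVC

# Toy stand-ins for the labeled cadaver-knee frames: each sample is the
# (over-exposed index, under-exposed index) pair of one cropped frame.
rng = np.random.default_rng(0)
X = rng.random((2950, 2))
y_stage1 = (X[:, 1] > X[:, 0]).astype(int)   # 1 = scene needs more light
y_stage2 = (X[:, 1] > 0.5).astype(int)       # 1 = genuinely increase light

# Stage 1 (more vs. less light): reported optima gamma = 0.001, C = 100.
svm_more_or_less = SVC(kernel="rbf", gamma=0.001, C=100.0).fit(X, y_stage1)
# Stage 2 (increase vs. hold): reported optima gamma = 0.01, C = 1.
svm_up_or_hold = SVC(kernel="rbf", gamma=0.01, C=1.0).fit(X, y_stage2)

def light_decision(features: np.ndarray) -> str:
    """Cascaded decision: 'decrease', 'increase' or 'no_change'."""
    if svm_more_or_less.predict([features])[0] == 0:  # 0 = needs less light
        return "decrease"
    if svm_up_or_hold.predict([features])[0] == 1:
        return "increase"
    return "no_change"

print(light_decision(np.array([0.2, 0.7])))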
In the next step, the over-exposed, under-exposed and normally exposed indices are calculated and used by the SVM classifiers, as detailed in Figure 5. The first SVM classifier determines whether the scene needs more or less illumination. If the scene requires more illumination, the second SVM classifier determines whether it actually needs more illumination or whether the current illumination should be maintained. Figure 6 shows the hyperplane and the optimized SVM (RBF) margin for the over- and under-exposed conditions, and the output identifying light intensity changes between the two classes.

Fig. 4 caption: Set (I) represents the over-exposed threshold selection process. The left-most images (a) are the original images captured by the stereo endoscope, the next column (b) shows the segmented images, and column (c) shows the corresponding mask for each stereo image pair. Over-exposed pixels are marked in green on the segmented images and are manually selected for each video frame. (d) shows the average IoU curve; the optimal value is obtained at intensity level ~226, which is selected. Likewise, Set (II) represents the under-exposed threshold selection process: (a) original images, (b) segmented images, (c) corresponding masks; under-exposed pixels are marked in blue and manually annotated for each video frame. (d) shows the average IoU curve; the optimal value is obtained at intensity level ~101, which is selected.

Fig. 5 caption: System overview of the light control system. The stereo pair is cropped to the areas of interest; the over-exposed, under-exposed and normally exposed indices are then calculated and used by the SVM classifiers.

In this work, any scene context change is referred to as motion. Pixel-wise comparison is futile due to illumination variation, displacement and rotation. Additionally, frame similarity indices such as the structural similarity index (SSIM) may not provide good results in this situation; moreover, most of these approaches are based on gray level image statistics such as the mean and variance. To compare two pixels under different illuminations, the census transform in grayscale is used in this article.

V. TRAINING

For training purposes, video data from two cadaveric knee experiments are used, and each frame is manually labeled. A scene is marked as over-exposed if it lacks image features due to pixel saturation; otherwise it is labeled under-exposed. A scene initially classified as over-exposed is then re-marked as normally exposed if it contains enough features and minimal saturation; otherwise it remains over-exposed. Additionally, if an image does not contain saturated pixels and increasing the light intensity would enhance its quality, it is labeled under-exposed even though it may not contain many dark pixels. This training dataset is used to train the second SVM; further SVM regularization is recommended to increase classifier accuracy. Initially, approximately 2,950 frames are used to create the training dataset. Optimization is performed over the RBF gamma and C parameters to fit the optimal classification boundary curvature and the penalty for misclassification. The optimal gamma value is 0.01 and the optimal C value is 1 for the classifier separating the up (increase light intensity) and no-change classes.
Similarly, for the classifier separating the intensity change up (increase) and down (decrease) classes, the method shows optimal performance at gamma equal to 0.001 and C equal to 100. The training accuracies are approximately 97.75% and 89.11%. Higher gamma values are discarded to avoid overfitting. During the optimization process, minimizing the classification error is given more weight than the marginal error.

A. Data Collection

During our cadaveric experiments, stereo video sequences of knee arthroscopy were collected. The stereo endoscope has a manual light control unit, and during arthroscopy the surgeon changes the light intensity several times in different parts of the knee cavity. Those video sequences are used to validate the model. Moreover, one set of dedicated video sequences was recorded that contains a linear light intensity change from low to high at some positions along the camera trajectory. Figure 7 presents the clinical context and a representative location of the arthroscope within the knee cavity during MIS.

VI. DESIGN OF THE ENDOSCOPIC STEREO CAMERA

Details of the stereo camera prototype developed in our lab are shown in Figure 8; a commercially available traditional arthroscope is also shown in this figure. Our stereo camera prototype is comprised of the following elements:
• a pair of muC103A cameras together with their C8262 UVC interface modules,
• two white LEDs (T0402W) for illumination,
• a 3D printed camera head for mounting the cameras and the LEDs,
• a 3D printed box containing the wires and the circuits, and
• an insertion tube.
The muC103A is a CMOS camera sensor from OmniVision which can stream video at 30 fps with 400 × 400 pixel resolution and a 120 deg field of view. Its tip diameter of 1.52 mm makes this camera ideal for endoscopic applications. The two cameras were mounted on the 3D printed head manually, hence some degree of misalignment between the two cameras was unavoidable. The T0402W LED consumes less than 20 mA at 3.3 V and is 1 mm wide. The baseline for the cameras was set to 1.52 mm (the distance between the optical centers of the two cameras). Considering the 87.5-degree field of view of the muC103A, the amount of overlap between the stereo pair is approximately 79% at 10 mm away from the stereo cameras.

VII. RESULT

A total of 3,350 video frames were selected from our cadaveric knee experiments; distorted and contaminated frames were discarded. During the training phase we selected 1/3 of the frames from each video file, covering different levels of difficulty such as shadow, bone joints and cavities, and close contact between the camera and tissue; those frames are used for cross-validation purposes. Table I shows all outcomes of the training phase. During the test phase, the fully trained classifier is tested against different arthroscopy video data along with the cross-validation dataset; in some video frames a different stereo camera is used. All the SVM graphs are created using scikit [29]. This methodology is compared against camera exposure control methods in order to establish a benchmark. A camera exposure control system limits the amount of light that falls on the digital camera sensor in order to achieve good image quality. In this evaluation, an increase of exposure time is simulated as an increase of light intensity, a decrease of exposure time as a decrease of light intensity, and no change of exposure time is interpreted as no change of light intensity. The compared methods are mainly based on histogram analysis and scene quality analysis; minimal sketches of the three algorithms, described next, are given below.
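The three baseline exposure metrics compared in the following paragraphs (gradient information, histogram third moment, and mean sample value) can be sketched compactly as below. These are our own implementations of the standard definitions; the exact normalizations in [21], [34]-[36] may differ:

import numpy as np

def gradient_information(gray: np.ndarray, gamma: float) -> float:
    """Algorithm 1: total gradient magnitude of a gamma-re-exposed frame."""
    g = (gray.astype(np.float64) / 255.0) ** gamma
    gy, gx = np.gradient(g)
    return float(np.sum(np.hypot(gx, gy)))

def third_moment(gray: np.ndarray) -> float:
    """Algorithm 2: third central moment of the intensity histogram."""
    q, _ = np.histogram(gray, bins=256, range=(0, 256))
    z = np.arange(256)
    p = q / gray.size
    m = float(np.sum(z * p))
    return float(np.sum((z - m) ** 3 * p))

def mean_sample_value(gray: np.ndarray) -> float:
    """Algorithm 3: MSV over five histogram regions; ~2.5 = well exposed."""
    q, _ = np.histogram(gray, bins=5, range=(0, 256))
    i = np.arange(1, 6)
    return float(np.sum(i * q) / max(np.sum(q), 1))

frame = (np.random.rand(400, 400) * 255).astype(np.uint8)
best_gamma = max((0.5, 0.8, 1.0, 1.25, 2.0),
                 key=lambda g: gradient_information(frame, g))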
However, compared to natural images, MIS images have a number of limitations, such as camera locomotion, varying distance between the camera and the surface, and non-uniform light distribution.

Algorithm 1 is based on gradient information [21], [34], computed from the image gradient magnitudes.

Fig. 8 caption: The potentiometer indicated by the red arrow is used to adjust the LED intensity. The endoscope circuits and extra wiring are contained inside the black box and connected to the computer using two USB cables.

In order to calculate the gradient information of images having different exposure values, a gamma function is used, as proposed in that article. The exposure value is then selected as the one that maximizes the total gradient information:

γ* = arg max_γ M(γ),

where M is the sum of gradient information and γ mimics the exposure time value. Algorithm 2 is based on the third moment of the image intensity histogram [35], calculated as proposed by the authors:

μ₃ = Σ_k (z_k − m)³ q(z_k) / (M × N),

where M × N is the image size, z_k the k-th gray level, m the mean intensity, and q(z_k) the number of occurrences of gray level z_k. Algorithm 3 is based on the middle tone distribution of the image intensity histogram [36]. The histogram is divided into five regions and the mean sample value (MSV) is used, described as

MSV = Σ_{i=1..5} i · x_i / Σ_{i=1..5} x_i,

where x_i is the sum of the values in region i of the histogram. According to their article, the image is correctly exposed when μ ≈ 2.5 [36].

Confusion matrices for lighting control in stereo frames and monocular frames are shown in Figure 9. Model occurrences are our manually assigned class labels, and predicted occurrences are the endoscopic illumination controller outcomes. For the intensity-increase class, the normalized confusion matrix shows 97% accuracy; the most significant confusion in that case, 0.009, is with the decrease-intensity class. Similarly, the decrease-intensity event achieved 97.8% classification accuracy, and the no-change event achieved 55.6% classification accuracy. The no-change class is hard in the stereo situation: in our tests, the two images of a stereo pair encountered different illumination in most situations, caused by shadows, the surface curvature of the internal anatomy, etc. Lighting control outcomes for the over-exposed, under-exposed and normally exposed classes are presented in Figure 11; upward arrow, downward arrow and dotted line annotations on the stereo images indicate the controller output for each case. Figure 12 compares the outcomes of the evaluated methods. Our proposed method achieved an average error of 7.37% for the stereo endoscope and 4.56% for the monocular endoscope. All the methods except Algorithm 1 fail to identify the no-change event for the monocular endoscope; Algorithm 1 incurs a 32.7% error in identifying the no-change event, whereas the proposed method detects it with 71.27% average accuracy (16.36% error in stereo and 41.1% error in monocular). Considering the overall accuracy in controlling illumination events, the proposed method outperforms state-of-the-art exposure control methods.

VIII. CONCLUSION

We have explored the feasibility of a support vector machine classifier for the endoscopic camera light intensity control problem in real time. Our results offer good prospects for casting the challenges of surgical scene visualization as a scene classification problem. The confusion matrices represent the overall performance of our implemented illumination controller. However, accuracy decreases for the 'no change' class category.
In knee arthroscopy this event occurs rarely; hence, very little training data was available to train this category. Additionally, in a stereo vision system the two viewpoints receive different illumination distributions, mostly due to the anatomical structure; hence, this specific class category encounters more error. A set of training video sequences with slowly varying light intensity at every possible intensity value is recommended to further improve this approach. Image patch-based similarity indices robust to radiometric, rotational and illumination changes are expected to further enhance the real-time feedback. It might be useful to combine a patch similarity or dissimilarity index with the degree of change provided by content indexing methods: regions of maximum similarity showing a positive improvement can confirm an intensity event as a true decision, as investigated in this work.
Impact of the disk thickness on X-ray reflection spectroscopy measurements

1 Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 200438 Shanghai, China. †E-mail: bambi@fudan.edu.cn. 2 Ulugh Beg Astronomical Institute, Tashkent 100052, Uzbekistan. 3 Theoretical Astrophysics, Eberhard-Karls Universität Tübingen, D-72076 Tübingen, Germany.

In a previous paper, we presented an extension of our reflection model RELXILL_NK to include the finite thickness of the accretion disk, following the prescription in Taylor & Reynolds (2018). In this paper, we apply our model to fit the 2013 simultaneous observations by NuSTAR and XMM-Newton of the supermassive black hole in MCG-06-30-15 and the 2019 NuSTAR observation of the Galactic black hole in EXO 1846-031. The high-quality data of these spectra had previously led to precise black hole spin measurements and very stringent constraints on possible deviations from the Kerr metric. We find that the disk thickness does not change previous spin results found with a model employing an infinitesimally thin disk, which confirms the robustness of spin measurements in high radiative efficiency disks, where the impact of disk thickness is minimal. Similar analysis of lower accretion rate systems will be an important test for measuring the effect of disk thickness on black hole spin measurements.

INTRODUCTION

Relativistic reflection features are common in the X-ray spectra of Galactic black holes and AGN (Fabian et al. 1989; Tanaka et al. 1995; Nandra et al. 2007). They are thought to be produced by illumination of the inner part of an accretion disk by a hot corona (Risaliti et al. 2013). In the rest-frame of the gas in the disk, the reflection spectrum is characterized by a soft excess below 2 keV, narrow fluorescent emission lines in the 1-10 keV band, in particular the iron Kα complex at 6.4 keV for neutral iron and up to 6.97 keV for H-like iron ions, and a Compton hump peaked at 20-30 keV (Magdziarz & Zdziarski 1995; Ross & Fabian 2005; García & Kallman 2010). The reflection spectrum of the whole disk detected far from the source is the sum of the reflection emission at different points on the disk, shaped by relativistic effects occurring in the strong gravity region near the black hole (gravitational redshift, Doppler boosting, light bending) (Fabian et al. 1989; Laor 1991). The analysis of the relativistic reflection features in the spectrum of a source can thus be used to study the accretion process in the inner part of the accretion disk, measure black hole spins, and even test fundamental physics (Brenneman 2013; Reynolds 2014; Bambi 2017).

The last decade has seen tremendous progress in the analysis of these reflection features in the spectra of accreting black holes, thanks to new observational facilities and more sophisticated theoretical models. However, all the available theoretical models still involve a number of simplifications, so caution is necessary in any attempt to infer precision measurements of the properties of these sources. Moreover, the next generation of X-ray observatories, such as eXTP (Zhang et al. 2016) or Athena (Nandra et al. 2013), promise to provide unprecedented high-quality data, and more sophisticated synthetic reflection spectra will be required to fit their data. There are thus important efforts among the community to further develop the current reflection models. In the past 5 years, our group has developed the model relxill_nk, which is an extension of the relxill package
(García et al. 2013, 2014) to non-Kerr spacetimes. The key feature of relxill_nk is that it does not employ the Kerr solution of the Einstein equations as the background metric. Synthetic reflection spectra are instead calculated in a more general background that includes the Kerr metric as a special case. The spacetime is characterized by some "deformation parameters", which vanish for the Kerr metric and can thus be used to quantify possible deviations from the Kerr geometry. From the comparison of reflection-dominated X-ray data with synthetic reflection spectra, we can constrain the values of these deformation parameters and thus test the Kerr black hole hypothesis. In most of our past studies, we have used the parametric black hole spacetime suggested in Johannsen (2013), which is not a solution of a specific theory of gravity but an ad hoc deformation of the Kerr metric designed for these kinds of studies. However, the model can also employ theoretically motivated black hole solutions (Zhou et al. 2018; Zhu et al. 2020; Zhou et al. 2021). relxill_nk has been used to analyze a number of reflection-dominated spectra of Galactic black holes and AGN, and in some cases we were able to derive much stronger constraints than those found with other electromagnetic and gravitational wave techniques (Zhang et al. 2019; Tripathi et al. 2019, 2020c, 2021). However, more and more precise measurements necessarily need to be more and more accurate, and this requires continuing the development of the model (Riaz et al. 2020c).

In Abdikamalov et al. (2020), we implemented in relxill_nk an accretion disk of finite thickness, following the prescription proposed in Taylor & Reynolds (2018a). We still have a Novikov-Thorne accretion disk perpendicular to the black hole spin and with the inner edge at the innermost stable circular orbit (ISCO), but, unlike all the currently public relativistic reflection models that assume an infinitesimally thin disk, our disk has a finite thickness, which increases as the mass accretion rate onto the black hole increases. In the present paper, we apply that model with a disk of finite thickness to fit the 2013 NuSTAR+XMM-Newton data of the supermassive black hole in MCG-06-30-15 and the 2019 NuSTAR data of the Galactic black hole in EXO 1846-031. These are high-quality data and have already been analyzed with the standard relxill_nk with an infinitesimally thin disk, providing among the most precise and accurate constraints on the deformation parameters of the Johannsen metric (Tripathi et al. 2019, 2020c). Moreover, MCG-06-30-15 is a supermassive black hole observed from a low viewing angle, while EXO 1846-031 is a stellar-mass black hole observed from a very high inclination angle. They are thus two data sets particularly suitable for testing the development of our model.

The paper is organized as follows. In Section 2, we briefly review our reflection model with an accretion disk of finite thickness. In Section 3, we analyze with our model the 2013 NuSTAR+XMM-Newton observations of the supermassive black hole in MCG-06-30-15. In Section 4, we fit the 2019 NuSTAR data of the Galactic black hole in EXO 1846-031. We discuss our results in Section 5.

ACCRETION DISK OF FINITE THICKNESS

In Abdikamalov et al. (2020), we extended our reflection model relxill_nk to include the accretion disk geometry proposed in Taylor & Reynolds (2018a). Here we briefly review the set-up.
We consider a geometrically thin and optically thick accretion disk with the mid-plane orthogonal to the black hole spin angular momentum. The disk is radiation dominated and its pressure scale height is given by (Shakura & Sunyaev 1973)

H(ρ) = (3 / 2η) ṁ [1 − (r_ISCO/ρ)^(1/2)], (1)

where ρ = r sin θ is the pseudo-cylindrical radius, r_ISCO is the radius of the ISCO (in this paper always assumed to be the location of the inner edge of the accretion disk), ṁ ≡ Ṁ/Ṁ_Edd is the Eddington-scaled mass accretion rate, and η is the radiative efficiency, defined as η = 1 − E_ISCO with E_ISCO the specific energy of a test particle at the ISCO. The surface of the accretion disk is determined by the function z(ρ) = 2H, and all matter of the disk at a given pseudo-cylindrical radius ρ has the same angular velocity. In Eq. (1), η and r_ISCO depend on the spacetime metric; for a given spacetime metric, the Eddington-scaled mass accretion rate regulates the thickness of the accretion disk. Fig. 1 shows the impact of ṁ in the Kerr spacetime for different values of the black hole spin parameter.

In the model suite relxill_nk, relativistic effects and the disk structure are included within the formalism of the transfer function (Cunningham 1975; Speith et al. 1995; Dauser et al. 2010). The observed spectrum can be written as

F_o(E_o) = ∫ I_o(E_o, X, Y) dΩ, (2)

where I_o(E_o, X, Y) is the specific intensity of the radiation with energy E_o on the observer's screen with Cartesian coordinates X and Y, dΩ = dX dY / D² is the element of solid angle subtended by the disk image in the observer's screen, and D is the distance of the source from the observer. Using Liouville's theorem (Lindquist 1966), we write I_o = g³ I_e, where g = E_o/E_e is the redshift factor between the photon energy measured on the observer's screen, E_o, and the photon energy at the emission point in the rest-frame of the gas in the disk, E_e. Eq. (2) can then be rewritten as

F_o(E_o) = (1/D²) ∫_{R_in}^{R_out} ∫_0^1 [π r_e g² / √(g*(1 − g*))] f(g*, r_e, ι) I_e(E_e, r_e, θ_e) dg* dr_e, (3)

where R_in and R_out are, respectively, the inner and the outer edges of the accretion disk, r_e is the radial coordinate of the emission point in the accretion disk, f is the transfer function defined as

f(g*, r_e, ι) = [g √(g*(1 − g*)) / (π r_e)] |∂(X, Y)/∂(g*, r_e)|, (4)

|∂(X, Y)/∂(g*, r_e)| is the Jacobian, θ_e is the emission angle in the rest frame of the gas, and g* is the relative redshift factor defined as

g* = (g − g_min)/(g_max − g_min), (5)

where g_min = g_min(r_e, ι) and g_max = g_max(r_e, ι) are the minimum and maximum values of the redshift factor g for fixed values of the emission radius r_e and the observer's viewing angle ι.

With the formalism of the transfer function, we can easily separate the calculation of the spectrum at the emission point in the rest-frame of the gas in the disk from the effects related to the disk structure and the spacetime metric, which modify such a spectrum and lead to that measured by an observer far from the source. This separation enables us to quickly calculate the observed reflection spectrum from the tabulated transfer functions without having to recalculate all photon trajectories, which is quite a time-consuming step. Since the accretion disk now has a finite thickness, a region of the inner part of the accretion disk may not be visible to the distant observer; for the non-visible points of the accretion disk, the transfer function vanishes. The transfer function and the emission angles are calculated once and tabulated in a FITS (Flexible Image Transport System) table. During the data analysis process, the values of the transfer function corresponding to the set of input parameters are entered into Eq. (3) using interpolation schemes.
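To make Eq. (1) concrete in the Kerr case, the sketch below evaluates r_ISCO and η with the standard Bardeen-Press-Teukolsky formulas and then the scale height profile. This is our illustrative implementation, not the model's actual code:

import numpy as np

def kerr_isco_radius(a: float) -> float:
    """Prograde Kerr ISCO radius in units of r_g (Bardeen et al. 1972)."""
    z1 = 1 + (1 - a**2) ** (1 / 3) * ((1 + a) ** (1 / 3) + (1 - a) ** (1 / 3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def radiative_efficiency(a: float) -> float:
    """eta = 1 - E_ISCO; for Kerr, E_ISCO = sqrt(1 - 2/(3 r_ISCO))."""
    return 1 - np.sqrt(1 - 2 / (3 * kerr_isco_radius(a)))

def scale_height(rho: np.ndarray, a: float, mdot: float) -> np.ndarray:
    """Eq. (1): H(rho) in r_g for Eddington ratio mdot; zero inside ISCO."""
    r_isco = kerr_isco_radius(a)
    eta = radiative_efficiency(a)
    h = 1.5 * mdot / eta * (1 - np.sqrt(r_isco / rho))
    return np.where(rho >= r_isco, h, 0.0)

rho = np.linspace(1, 50, 500)
for mdot in (0.05, 0.1, 0.2, 0.3):             # the tabulated Eddington ratios
    H = scale_height(rho, a=0.998, mdot=mdot)  # disk surface is z = 2H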
The table has a grid of three dimensions, namely the black hole spin parameter a*, the deformation parameter α13, and the viewing angle ι, which are sampled on a grid of 30 × 30 × 22 points, respectively. The spin parameter grid is distributed in such a way that it becomes denser as the spin increases, as the change of the ISCO becomes faster at higher values of a*. The grid of emission angles is evenly distributed in 0 < cos ι < 1. The grid of the deformation parameter α13 is evenly distributed in the range [−5, 5] or the range imposed by constraints on α13, whichever is smaller. For each set of grid points (a*, α13, ι), the accretion disk is divided into 100 emitting radii from r_ISCO to 1000 r_g, where r_g = G_N M/c² is the gravitational radius of the black hole. For every emitting radius, the minimum and maximum redshift factors are calculated, as well as the transfer functions and the emission angles for 40 evenly spaced values of g* on each branch.

The general relativistic ray tracing code, fully described in Abdikamalov et al. (2019), calculates the parameters required for the FITS file. The code solves the photon trajectories backwards in time, from the observer's screen to the accretion disk. As we do not know a priori where every photon lands, an adaptive algorithm samples the initial conditions of the photon so that it lands at the desired value of r_e, where the redshift factor and the emission angle are calculated. The Jacobian, and then the transfer function, are calculated by firing two photons that land at the same emission radius r_e and two more adjacent photons landing at neighboring radii. Finally, a separate script processes all photon data and generates the FITS file for a specific value of ṁ. The current model structure does not allow the Eddington ratio to be a model parameter, as this would result in a too large FITS file. We thus have FITS tables for specific mass accretion rates (0%, 5%, 10%, 20%, and 30%). Fig. 2 shows our synthetic reflection spectra in the Kerr spacetime for different values of the black hole spin parameter a*, inclination angle of the disk ι, and mass accretion rate ṁ.

Fig. 2 caption: The inner edge of the disk is set at the ISCO radius and the emissivity profile is described by a power-law with emissivity index q = 3. The radiation illuminating the disk has photon index Γ = 2 and high-energy cutoff E_cut = 300 keV. The iron abundance of the disk is A_Fe = 1 (solar iron abundance) and its ionization is log ξ = 3.1.

MCG-06-30-15

The X-ray spectrum of MCG-06-30-15 normally has a prominent and very broad iron line, so it is quite a good candidate for precision measurements from the analysis of its relativistic reflection features (Brenneman & Reynolds 2006). Marinucci et al. (2014) analyzed simultaneous observations of NuSTAR and XMM-Newton and inferred the black hole spin parameter a* = 0.91+0.06−0.07. The same data set was analyzed in Tripathi et al. (2019) with relxill_nk to test the Kerr nature of the compact object, constraining the deformation parameters α13 and α22 of the Johannsen metric; we found α13 = 0.00+0.07. X-ray data from various missions confirm that the spectrum of MCG-06-30-15 is modified by the presence of warm absorbers (Otani et al. 1996; Lee et al. 2000; Young et al. 2005). Optical observations reveal the presence of a dusty neutral absorber (Turner et al. 2003, 2004). Besides the complex spectrum, MCG-06-30-15 is also highly variable.
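The interpolation step described above can be illustrated with a small sketch: given a table of transfer-function values on the (a*, α13, cos ι) grid, the value for an arbitrary parameter set is obtained by multilinear interpolation. The grid sizes follow the text; the table itself and the spin-grid spacing are stand-ins, not the model's FITS content:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Grid axes as described in the text: 30 spins (denser at high spin),
# 30 deformation values, 22 inclinations; table values are stand-ins.
a_grid = 0.998 * (1 - np.linspace(1, 0, 30) ** 2)   # illustrative spacing
a13_grid = np.linspace(-5, 5, 30)
cos_iota_grid = np.linspace(0.01, 0.99, 22)
table = np.random.rand(30, 30, 22)  # e.g. f at one fixed (g*, r_e)

interp = RegularGridInterpolator((a_grid, a13_grid, cos_iota_grid), table)

# Transfer-function value for an arbitrary parameter point.
f_value = interp([[0.94, 0.0, np.cos(np.deg2rad(30.0))]])[0]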
Data reduction

NuSTAR and XMM-Newton observed MCG-06-30-15 simultaneously, starting from 2013 January 29, for a total duration of about 360 ks. Tab. 1 shows the details of the observations used in the present work. The high variability of the source is evident in its light curve in Fig. 3: the flux changes by a factor of 5 during these observations.

NuSTAR observed MCG-06-30-15 with its focal plane modules (FPM) A and B in three consecutive observations, for about 360 ks. The raw data are processed to clean event files using the NUPIPELINE routine of the NUSTARDAS package, distributed as part of the high energy analysis package HEASOFT, with CALDB v20180312. The source region is selected around the center of the source with a radius of 70 arcsec. A background region with a radius of 100 arcsec is taken on the same detector, as far as possible from the source. Spectra, response files, and ancillary files are generated using the routine NUPRODUCTS. Requiring that the spectral resolution be oversampled by at least a factor of 2.5, we find that we need to rebin the data to have at least 70 counts per bin. More details can be found in Tripathi et al. (2019).

XMM-Newton (Jansen et al. 2001) observed MCG-06-30-15 with its EPIC-Pn and EPIC-MOS1/2 cameras for three consecutive revolutions, for about 315 ks. These cameras operated in medium filter and small window mode. We do not use the MOS data for our analysis because they are strongly affected by pile-up. The raw data for each revolution are converted into cleaned event files using SAS v16.0.0 and then combined into single event files. Good time intervals (GTIs) are generated using TABTIGEN and then used to filter the combined cleaned event file. For the source, a circular region with a radius of 40 arcsec is taken around its center. A background region with a radius of 50 arcsec is taken as far as possible from the source center, on the same detector. Response files and ancillary files are generated after backscaling. The spectra are binned so as to oversample the instrument resolution by a factor of 3 and to have a minimum of 50 counts per bin, in order to apply χ²-statistics. More details are presented in Tripathi et al. (2019).

Because of the high variability of the source, strictly simultaneous flux-resolved data are used for the spectral analysis. FPMA, FPMB, and EPIC-Pn data are divided into four flux states (low, medium, high, very-high) in such a way that the photon count in every state is similar. The same flux-resolved technique was used in Tripathi et al. (2019). Marinucci et al. (2014) adopted a different method: they grouped the data into different time intervals according to their hardness. In order to have strictly simultaneous data, the GTIs from the instruments were combined using the ftool MGTIME. From every instrument (FPMA, FPMB, EPIC-Pn) we thus have four spectra, corresponding to the four flux states (low, medium, high, very-high), so twelve spectra in total.

We fit all the flux states simultaneously. For each flux state, we freeze the calibration constant of EPIC-Pn to 1, while the calibration constants of FPMA (C_FPMA) and FPMB (C_FPMB) are left free. We find that the values of the ratio C_FPMB/C_FPMA are always in the range 0.95 to 1.05, which agrees with the standard cross-calibration between the two instruments. Fig. 4 shows the data to best-fit model ratio when we fit the low flux state data with an absorbed power-law. Below 3 keV, we see residuals due to warm ionized absorbers
(Lee et al. 2000; Sako et al. 2003) and an excess of photon counts, which is quite common in AGN (Gierliński & Done 2004; Crummy et al. 2006; Miniutti et al. 2009; Walton et al. 2014). Above 3 keV, we see a prominent and broad iron line peaked around 6 keV and a Compton hump above 10 keV. In order to fit the whole spectrum of the source, we add a relativistic reflection component, produced in the inner region of the accretion disk, and a non-relativistic reflection component, produced by illumination of some cold material far from the compact object. These components are modified by two warm ionized absorbers and a dusty neutral absorber; the latter is constructed by studying the Chandra data of this source. A narrow emission line and a narrow absorption line are also seen in the spectra. In XSPEC language, the model reads

tbabs×dustyabs×warmabs1×warmabs2×(cutoffpl+relxill_nk+xillver+zgauss+zgauss).

tbabs describes the Galactic absorption and has the hydrogen column density N_H as its only parameter (Wilms et al. 2000); following Dickey & Lockman (1990), we freeze N_H to the value 3.9 × 10^20 cm^−2. dustyabs describes a dusty neutral absorber (see Lee et al. 2000 for details). warmabs1 and warmabs2 are two warm absorbers, each with a column density and an ionization as parameters; the presence of two warm absorbers was determined in Tripathi et al. (2019) by comparing the χ² of models with different numbers of warm absorbers. We also note that Marinucci et al. (2014) found two warm absorbers in this data set from the analysis of RGS data. cutoffpl models the primary emission from the Comptonized corona with an exponential high-energy cutoff. relxill_nk describes the relativistic reflection component (Abdikamalov et al. 2019); in this paper, we use the version in which the spacetime is described by the Johannsen metric with non-vanishing deformation parameter α13 (Johannsen 2013) and the accretion disk has a finite thickness. xillver describes the non-relativistic reflection component generated by illumination by the corona of some cold material relatively far from the central black hole (García & Kallman 2010). zgauss is a redshifted Gaussian profile and is used to fit the two remaining prominent features: one can be interpreted as a narrow oxygen emission line at 0.81 keV; the other is a narrow absorption line at 1.22 keV and can be interpreted as a blueshifted oxygen absorption line due to some relativistic outflow (Leighly et al. 1997). These two lines are relatively prominent in every flux state; see the single flux state analysis in Tripathi et al. (2019), in particular their Tab. 5. We do not see any significant energy shift among different flux states. The normalization of the 0.81 keV line is of the order of 10^−2, while the 1.22 keV line is less prominent and its normalization is of the order of 10^−5. These features were also found in Marinucci et al. (2014).

We fit the four flux states together. For NuSTAR, we use the FPMA and FPMB data in the energy range 3-80 keV. For XMM-Newton, we use the data in the energy range 0.5-10.0 keV, but we exclude the energy range 1.5-2.5 keV because of instrumental issues, as discussed in Marinucci et al. (2014) and Tripathi et al. (2019). The data below 0.5 keV and above 10.0 keV are ignored because of their poor quality and strong background, respectively.
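As an illustration of how such a multi-component fit is set up, the sketch below builds a reduced version of the model in PyXspec. The file names are placeholders; dustyabs, the warm absorbers, relxill_nk and xillver are local model packages (loadable via AllModels.lmod) that we do not assume here, so only the standard continuum components are instantiated:

from xspec import AllData, Fit, Model

# Placeholder file names; the real analysis loads twelve flux-resolved
# spectra (FPMA, FPMB, EPIC-Pn in four flux states) as separate groups.
AllData("1:1 pn_low.pha 2:2 fpma_low.pha 3:3 fpmb_low.pha")
AllData.ignore("**-0.5 10.0-**")  # detector-dependent in the real fit

# The published model also includes dustyabs, two warmabs components,
# relxill_nk and xillver; those require local model packages loaded
# with AllModels.lmod(...). Continuum-only stand-in:
m = Model("tbabs*cutoffpl")
m.TBabs.nH = 0.039            # 3.9e20 cm^-2, in XSPEC units of 1e22 cm^-2
m.TBabs.nH.frozen = True      # Galactic column frozen (Dickey & Lockman 1990)

Fit.statMethod = "chi"
Fit.query = "yes"
Fit.perform()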
The hydrogen column density N_H in tbabs is frozen to the same value for all flux states, while the iron column density in dustyabs is free in the fit but kept constant over the four flux states, as there is no reason to expect any appreciable variation of its value over the timescale of the observation; see Lee et al. (2000) for more details about the absorption of this source. In the two warm absorbers, warmabs1 and warmabs2, the column density and ionization are free in the fit and are allowed to vary among the flux states: these are warm ionized clouds around the source and their variability timescale can be short. Even the primary emission from the corona, here described by cutoffpl, can change over short timescales, so its photon index, high-energy cutoff, and normalization are all free in the fit and are allowed to vary between flux states.

In relxill_nk, the emissivity profile is described by a broken power-law, so we have three parameters: the inner emissivity index q_in, the outer emissivity index q_out, and the breaking radius R_br. Initially they are all free in the fit and are allowed to vary between flux states, as the illumination of the disk may change from one flux state to another. However, we find that q_out is close to 3 in all flux states, so we repeat the fit with the outer emissivity index frozen to 3 in all flux states. The normalization of relxill_nk is allowed to vary among flux states, as the reflection component depends on the primary emission from the corona, which may differ between flux states. The reflection fraction in relxill_nk is frozen to −1, as the continuum from the corona is already described by cutoffpl. The parameters that are not expected to vary over the timescale of the observation have their values tied between flux states: the black hole spin parameter a*, the viewing angle ι, the deformation parameter α13, and the iron abundance A_Fe. All other free parameters in the model are allowed to vary between flux states, including the ionization parameter ξ, as the illumination of the accretion disk can change too. The mass accretion rate ṁ, which is the parameter regulating the thickness of the disk, is frozen to 0 (Model 0), 0.05 (Model 1), 0.1 (Model 2), 0.2 (Model 3), and 0.3 (Model 4).

We impose the same normalization of xillver for all flux states, as the emission of the distant reflector is not expected to vary substantially between flux states. Its photon index and high-energy cutoff are tied to the values of these parameters in cutoffpl and are allowed to differ between flux states. The ionization parameter in xillver is frozen to 0, as the distant reflector is supposed to be far from the central black hole and its material should be neutral. The iron abundance of the distant reflector is frozen to the solar value.

The results of the fits for Models 0-4 are all very similar. Fig. 5 shows the best-fit models and the data to best-fit model ratios for the four flux states for Model 0 (ṁ = 0, infinitesimally thin disk); we do not show the same figures for Models 1-4 because no clear difference is visible. Tab. 2, Tab. 3, and Tab. 4 summarize the best-fit values for Models 0-4. Again, the results are quite similar and the difference in χ² among the models is small. We also note that all our measurements are very similar to the estimates found in Tripathi et al.
The results of the fits for Models 0-4 are all very similar. Fig. 5 shows the best-fit models and the data to the best-fit model ratios for the four flux states for Model 0 (ṁ = 0, infinitesimally thin disk). We do not show the same figures for Models 1-4 because we do not see any clear difference. Tab. 2, Tab. 3, and Tab. 4 summarize the best-fit values for Models 0-4. Again, the results are quite similar and the difference in χ² among the different models is small. We also note that all our measurements are very similar to the estimates found in Tripathi et al. (2019) with an earlier version of relxill_nk and assuming an infinitesimally thin disk. A direct comparison with Marinucci et al. (2014) is not straightforward, because they use a different scheme to deal with the source variability, but we still recover consistent results (the minor discrepancy on the black hole spin can be explained with the angle-averaged reflection model used in Marinucci et al. 2014; see Section 5). The constraints on the black hole spin parameter a_* and the Johannsen deformation parameter α_13 for Models 0-4, after marginalizing over all other free parameters, are reported in Fig. 6, where the red, green, and blue curves are, respectively, the 68%, 90%, and 99% confidence level limits for two relevant parameters. The discussion of our spectral analysis of MCG-06-30-15 is postponed to Section 5.

EXO 1846-031

EXO 1846-031 was first detected by EXOSAT on 1985 April 3 (Parmar & White 1985) and only later was it identified as a low-mass X-ray binary (Parmar et al. 1993). A second outburst of this source was detected in 1994 by CGRO/BATSE (Zhang et al. 1994). After about 25 years of quiescence, MAXI observed a third outburst of EXO 1846-031 in July 2019 (Negoro et al. 2019). Following the MAXI detection, the source was observed with other instruments. In this section, we use the NuSTAR data, first analyzed in Draghis et al. (2020).

[Table 2 caption: MCG-06-30-15. Summary of the best-fit values for Model 0 (ṁ = 0) and Model 1 (ṁ = 0.05). The ionization parameters (ξ, ξ_1, ξ_2, and ξ') are in units of erg cm s^-1. The reported uncertainties correspond to the 90% confidence level for one relevant parameter (Δχ² = 2.71). A marked entry indicates that the value of the parameter is frozen in the fit. (P) means that the 90% confidence level reaches the upper boundary of the black hole spin parameter (a_*^max = 0.998). q_in is allowed to vary in the range 0 to 10.]

Unlike the case of MCG-06-30-15 analyzed in the previous section, the NuSTAR light curve of EXO 1846-031 shows minimal variability in the count rate, and therefore we can directly use the time-averaged spectrum in our spectral analysis. The raw data are reduced to cleaned event files using the NUPIPELINE routine of the HEASOFT package with CALDB v20200912. For the source spectra, we take a circular region with a radius of 180 arcsec centered on the source. We extract a background region of the same size as far as possible from the source on the same detector to avoid any source contamination. The ancillary and response files are generated through the NUPRODUCT routine. The source spectra are rebinned to have a minimum of 30 counts per bin in order to use χ²-statistics.

Spectral analysis

We fit the FPMA and FPMB spectra together in the energy range 3-80 keV. To start, we fit the data with an absorbed power-law; the data to the best-fit model ratio is shown in Fig. 7. We clearly see a broad iron line peaked around 7 keV and a Compton hump peaked around 30 keV. The fact that the iron line is so broad already suggests that the inner edge of the accretion disk extends to a region very close to the black hole event horizon. To fit these reflection features, we use relxill_nk with the spacetime described by the Johannsen metric with non-vanishing deformation parameter α_13 and an accretion disk of finite thickness. It is the same version of relxill_nk as that used in the previous section for MCG-06-30-15. The inner edge is set at the ISCO radius and the outer edge is left at the default value of 400 r_g.
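Since the inner edge is set at the ISCO radius, it is useful to recall how the ISCO depends on the spin. The sketch below implements the standard Bardeen, Press & Teukolsky (1972) formula for a prograde Kerr orbit, together with the corresponding Novikov-Thorne radiative efficiency η = 1 − E_ISCO; note that it applies only to the Kerr limit (α_13 = 0) of the Johannsen metric used in the fits.

```python
import numpy as np

def r_isco_kerr(a):
    """ISCO radius (in r_g = GM/c^2) for a prograde orbit around a Kerr
    black hole with dimensionless spin a (Bardeen, Press & Teukolsky 1972).
    Valid only for the Kerr case, i.e., alpha_13 = 0."""
    z1 = 1 + (1 - a**2) ** (1/3) * ((1 + a) ** (1/3) + (1 - a) ** (1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def efficiency(a):
    """Novikov-Thorne radiative efficiency eta = 1 - E_ISCO for Kerr."""
    return 1 - np.sqrt(1 - 2 / (3 * r_isco_kerr(a)))

for a in (0.0, 0.9, 0.998):
    print(f"a_* = {a}: r_ISCO = {r_isco_kerr(a):.3f} r_g, "
          f"eta = {efficiency(a):.3f}")
```

For a_* = 0 this returns the familiar r_ISCO = 6 r_g and η ≈ 0.057, while near-extremal spins give r_ISCO ≈ 1.24 r_g and η above 0.3, which is the high radiative efficiency regime discussed later in the text.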
The emissivity profile is modeled with a broken power-law and we have three free parameters: the inner emissivity index q_in, the outer emissivity index q_out, and the breaking radius R_br. The reflection fraction cannot be constrained well and is therefore frozen to 1, as in Draghis et al. (2020). As in the previous section for the spectral analysis of MCG-06-30-15, the mass accretion rate, ṁ, is frozen to 0 (Model 0), 0.05 (Model 1), 0.1 (Model 2), 0.2 (Model 3), and 0.3 (Model 4). The fit of the model tbabs×relxill_nk still presents some residuals at low energies, which we remove by adding a continuum component from the accretion disk with diskbb. The latter is a Newtonian model for an infinitesimally thin disk, but the residual is small and it is not necessary to employ a more sophisticated model. We still see an absorption feature around 7 keV, which can be interpreted as absorption by material in the disk wind at a relatively high inclination angle. Similar features are observed in other sources, for instance 4U 1630-472 (King et al. 2014). We model this absorption feature with a Gaussian. The fluxes of FPMA and FPMB clearly appear different at low energies. As discussed in Madsen et al. (2020), this is likely an instrumental issue in FPMA. Here we follow the same procedure as in Draghis et al. (2020). First, we fit the FPMA and FPMB spectra together with a free constant cross-calibration, ignoring the 3-7 keV energy band. We then freeze the cross-calibration constant to the value found by our fit and include the data in the 3-7 keV energy band. At this point, we add the multiplicative table described in Madsen et al. (2020). We fix the MLI fraction in the FPMB spectrum to 1 and allow the MLI fraction in the FPMA spectrum to vary. In the end, our total model is

tbabs×nuMLI×(relxill_nk + diskbb + gaussian).

Tab. 5 shows the best-fit values for Models 0-4. The best-fit model and the data to the best-fit model ratio for Model 0 are shown in Fig. 8. We do not report the same figures for Models 1-4, but they are all very similar to Fig. 8. Fig. 9 shows the constraints on the black hole spin parameter a_* and the Johannsen deformation parameter α_13 for Models 0-4. The red, green, and blue curves represent, respectively, the 68%, 90%, and 99% confidence level limits for two relevant parameters. The discussion of our results is in the next section.

DISCUSSION AND CONCLUSIONS

Within our program of development of the reflection model relxill_nk, in Abdikamalov et al. (2020) we presented a version in which the accretion disk has a finite thickness, following the implementation first proposed in Taylor & Reynolds (2018a). In the present work, we have used that new version of relxill_nk to fit the high-quality spectra of two sources in order to evaluate the impact of the disk thickness on the estimates of the model parameters, and in particular on the measurement of the black hole spin parameter a_* and the Johannsen deformation parameter α_13. For this study, we have considered the 2013 simultaneous observations of NuSTAR and XMM-Newton of MCG-06-30-15 and the 2019 NuSTAR observation of EXO 1846-031. Previous analyses of these spectra had reported very precise constraints on a_* and α_13, so changes in the theoretical model can potentially lead to a different measurement of these parameters. Moreover, these two sources are different: MCG-06-30-15 hosts a supermassive black hole observed from a low viewing angle and EXO 1846-031 hosts a stellar-mass black hole observed from a high viewing angle.
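Because constraints are quoted at fixed confidence levels throughout (Δχ² = 2.71 for the 90% level with one relevant parameter in the tables, and two-parameter contours in the spin versus α_13 planes), the following snippet recovers the standard Δχ² thresholds from the χ² distribution:

```python
from scipy.stats import chi2

# Delta-chi^2 thresholds for confidence regions over n jointly varied
# ("relevant") parameters, as used for the (a_*, alpha_13) planes.
for cl in (0.683, 0.90, 0.99):
    print(f"{cl:.1%}: "
          f"1 param -> dchi2 = {chi2.ppf(cl, df=1):.2f}, "
          f"2 params -> dchi2 = {chi2.ppf(cl, df=2):.2f}")
# 90%, 1 param  -> 2.71 (the tabulated uncertainties);
# 68/90/99%, 2 params -> 2.30 / 4.61 / 9.21 (the plotted contours).
```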
Previous analyses had also reported different disk emissivity profiles for the two sources, suggesting a different coronal geometry: MCG-06-30-15 with a high value of the inner emissivity index and a value of the outer emissivity index consistent with 3, and EXO 1846-031 with a high value of the inner emissivity index and a value of the outer emissivity index close to 0. The spectrum of each source has been fit with five different Eddington-scaled mass accretion rates: 0% (corresponding to the standard model with an infinitesimally thin disk), 5%, 10%, 20%, and 30%.

For MCG-06-30-15, the five models provide very similar results, as we can see in Tab. 2, Tab. 3, and Tab. 4. These results are also very similar to those found in our previous analysis with an older version of relxill_nk and an infinitesimally thin accretion disk (Tripathi et al. 2019), and are consistent with those reported in Marinucci et al. (2014), even if a direct comparison is not possible for the parameters that vary among flux states because Marinucci et al. (2014) use a different scheme. We note that here and in Tripathi et al. (2019) we find a somewhat higher value of the black hole spin parameter than that inferred in Marinucci et al. (2014). Such a discrepancy is mainly due to the difference between the angle-resolved reflection model relxill/relxill_nk and the angle-averaged model relconv×xillver employed in Marinucci et al. (2014) (more details can be found in Tripathi et al. 2020).

[Figure 5 caption: in the best-fit model plots, the total spectrum is in black, the cutoffpl component in green, the relxill_nk component in red, and the xillver component in blue; in the ratio plots, blue crosses are for NuSTAR/FPMA, red crosses for NuSTAR/FPMB, and green crosses for XMM-Newton.]

[Figure 6 caption: constraints on the black hole spin parameter a_* and the Johannsen deformation parameter α_13 for the black hole in MCG-06-30-15 from Models 0-4 after marginalizing over all the free parameters. The red, green, and blue curves represent, respectively, the 68%, 90%, and 99% confidence level curves for two relevant parameters. The gray region is ignored in our analysis because the spacetime is pathological there. The thick horizontal line at α_13 = 0 marks the Kerr solution.]

[Figure 8 caption: best-fit model and data to the best-fit model ratio for EXO 1846-031 for Model 0 with an infinitesimally thin accretion disk. The total spectrum is in black, the relxill_nk component in red, and the diskbb component in cyan; blue crosses are for NuSTAR/FPMA and red crosses for NuSTAR/FPMB.]

We also note that the χ² of Models 0-4 are very similar. Model 3 (ṁ = 0.2) has the lowest χ² (3027.47) and Model 1 (ṁ = 0.05) has the highest one (3028.67): the difference is only Δχ² = 1.20. The weak impact of the disk thickness on the reflection spectrum of a high spin/high radiative efficiency source observed from a small viewing angle is already clear in the plot in the top-right corner of Fig. 2. Even if our current version of relxill_nk cannot have ṁ free, we can easily argue that we could not measure it in MCG-06-30-15, in the sense that any attempt to infer its value from the reflection spectrum would give an unconstrained parameter.

The analysis of the NuSTAR spectrum of EXO 1846-031 leads to similar, even if not identical, conclusions. Tab. 5 shows the best-fit values of the model parameters.
As can also be seen from Fig. 9, the thickness of the disk does not seem to have any significant impact on the estimates of the black hole spin parameter a_* and the Johannsen deformation parameter α_13: the final constraints are extremely similar for Models 0-4. This is somewhat more remarkable because the source is observed at a very high inclination angle, so a region of the very inner part of the accretion disk may be obscured. As we can see from the bottom-right panel of Fig. 2, spectra for different ṁ are not as similar as in the case of a low viewing angle. Note, however, that the extremely high estimate of the spin parameter of the source indicates a very high radiative efficiency and, therefore, the thickness of the disk should be quite modest; see Fig. 1. For EXO 1846-031, we find a more pronounced difference among the χ² of models with different mass accretion rates. The lowest χ² is found for the infinitesimally thin disk model (Model 0, 2751.86) and χ² monotonically increases as ṁ increases, reaching χ² = 2758.18 for Model 4 with ṁ = 0.3. Such a trend, which cannot be seen in the analysis of MCG-06-30-15, where all fits are very similar because of the low viewing angle, should be investigated further in the future with other spectra: it seems that the model with an infinitesimally thin disk fits the data better than the models with a disk of finite thickness, and thus that the disk structure employed in relxill_nk is unsuitable to describe real accretion disks around black holes, or at least the accretion disk around the black hole in EXO 1846-031 during the NuSTAR observation. We note that the mass and the distance are poorly constrained for EXO 1846-031. Draghis et al. (2020) estimate that the Eddington-scaled disk luminosity in this observation is in the range 0.06 to 0.25, assuming the black hole mass M = 9 ± 5 M_⊙ (from the continuum-fitting method with 1-σ error) and the distance D = 7 kpc. If we calculate the 0.1-100 keV unabsorbed flux for our best-fit model, we recover a similar result. However, the uncertainty is so large that we have no indication of which value of ṁ we should use to fit the reflection spectrum. The thickness of the disk seems to have the effect of decreasing the estimate of the outer emissivity index q_out and increasing the measurement of the breaking radius R_br. The estimates of the other model parameters do not show any variation with different values of the mass accretion rate ṁ. The iron abundance, A_Fe, is estimated to be below the solar abundance regardless of the value of ṁ, and this might be caused by the fact that our model does not include the returning radiation in the calculation of the reflection spectrum and the viewing angle of the source is high (see the discussion in Riaz et al. 2021). The impact of the thick disk model on the estimates of q_out and R_br indicates the presence of a correlation between the thickness of the disk and the emissivity profile (and, consequently, the coronal geometry). If we freeze q_in, q_out, and R_br to the best-fit values found with the infinitesimally thin disk model and refit the data with Models 1-4, we find that only the estimate of the inclination angle of the disk changes (see Tab. 6), while all other parameters, including the black hole spin and the deformation parameter α_13, do not show any appreciable difference. Such a result supports the claim of the robustness of the estimates of the black hole spin parameter and of the deformation parameter α_13, at least in the high radiative efficiency regime.
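As noted above, the Eddington ratio of EXO 1846-031 is only loosely known because the mass and distance are poorly constrained. The sketch below shows the flux-to-Eddington-ratio conversion under an isotropic-emission assumption; the flux value used is purely hypothetical, while M = 9 ± 5 M_⊙ and D = 7 kpc are the values quoted in the text.

```python
import numpy as np

KPC_CM = 3.086e21         # cm per kpc
L_EDD_COEFF = 1.26e38     # erg/s per solar mass (hydrogen Eddington luminosity)

def eddington_ratio(flux_cgs, distance_kpc, mass_msun):
    """Convert an unabsorbed flux (erg/s/cm^2) to an Eddington ratio,
    assuming isotropic emission."""
    d = distance_kpc * KPC_CM
    lum = 4 * np.pi * d**2 * flux_cgs      # isotropic luminosity, erg/s
    return lum / (L_EDD_COEFF * mass_msun)

# Hypothetical 0.1-100 keV flux for illustration only (not a measured value):
flux = 1.0e-8  # erg/s/cm^2
for m in (4, 9, 14):  # roughly the 1-sigma range of the mass estimate
    print(f"M = {m} Msun: L/L_Edd = {eddington_ratio(flux, 7.0, m):.3f}")
```

The spread of the printed values across the mass range illustrates why the measured flux alone cannot pin down which value of ṁ should be used in the reflection fit.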
[Figure 9 caption: constraints on the black hole spin parameter a_* and the Johannsen deformation parameter α_13 for the black hole in EXO 1846-031 from Models 0-4 after marginalizing over all the free parameters. The red, green, and blue curves represent, respectively, the 68%, 90%, and 99% confidence level curves for two relevant parameters. The gray region is ignored in our analysis because the spacetime is pathological there. The thick horizontal line at α_13 = 0 marks the Kerr solution.]

[Table 6 caption: best-fit values of the inclination angle of the disk, ι, and χ²/dof for EXO 1846-031 for Models 1-4 when we fit the data freezing q_in, q_out, and R_br to the best-fit values found in the fit of Model 0. The estimates of the other parameters do not show any appreciable difference.]

In conclusion, our analysis of MCG-06-30-15 and EXO 1846-031 suggests that the thickness of the disk has quite a modest impact on the fit of the reflection features in general, and on the estimates of the black hole spin parameter and the Johannsen deformation parameter in particular. We want to remark that we are still assuming a Novikov-Thorne accretion disk with the inner edge at the ISCO radius. The accretion disk is thus geometrically thin, and for this reason we limit the model to the range ṁ = 0-0.3; we simply have a thin disk of finite thickness, and the thickness increases as the mass accretion rate increases. The scenario studied in Riaz et al. (2020a) and Riaz et al. (2020b) was different: in those papers we simulated the reflection spectra of geometrically thick accretion disks and we found that we could easily get very precise but inaccurate estimates of the black hole spin parameter and the Johannsen deformation parameter, which is the opposite conclusion of the present work. We would also like to point out that our finding is not inconsistent with the conclusions of Taylor & Reynolds (2018a), where the authors fit some simulated spectra and find that the thickness of the disk can lead to inaccurate estimates of the black hole spin parameter and the disk's inclination angle. In Taylor & Reynolds (2018a), the simulations are done assuming a black hole spin a_* = 0.9 (so with a lower radiative efficiency with respect to our sources) and a lamppost coronal geometry. If the height of the lamppost is low, the intensity profile is very different from the case of an infinitesimally thin disk, where at large radii the emissivity should scale as 1/r³. Here we fit the data of two sources in which the geometry of the corona is unknown, but presumably different from the lamppost set-up (which, assuming an infinitesimally thin disk, provides a worse fit than the broken power-law model). The disk's intensity profile, which depends on the coronal geometry, thus seems to be crucial in determining whether the thickness of the disk has an important impact on the estimates of the model parameters. This is consistent with the conclusions found in Riaz et al. (2020a) and Riaz et al. (2020b) for geometrically thick disks, where the estimate of the black hole spin parameter is strongly determined by the disk's emissivity profile when we use an incorrect theoretical model.
Structural Level Differences in the Mother-to-Child HIV Transmission Rate in South Africa: A Multilevel Assessment of Individual-, Health Facility-, and Provincial-Level Predictors of Infant HIV Transmission

Objectives: In 2010, South Africa reported an early mother-to-child transmission (MTCT) rate of 3.5% at 4-8 weeks postpartum. Provincial early MTCT rates ranged from 1.4% [95% confidence interval (CI): 0.1 to 3.4] to 5.9% (95% CI: 3.8 to 8.0). We sought to determine reasons for these geographic differences in MTCT rates. Methods: This study applied multilevel modeling to 2010 South African prevention of mother-to-child transmission (PMTCT) evaluation (SAPMTCTE) data from 530 facilities. Interview data and blood samples of infants were collected from 3085 mother-infant pairs at 4-8 weeks postpartum. Facility-level data on human resources, referral systems, linkages to care, and record keeping were collected through facility staff interviews. Provincial-level data were gathered from publicly available data (eg, health professionals per 10,000 population) or aggregated at province level from the SAPMTCTE [PMTCT maternal-infant antiretroviral (ARV) coverage]. Variance partition coefficients and odds ratios (for provincial-, facility-, and individual-level factors influencing MTCT) from multilevel modeling are reported. Results: The facility-level (5.0%) and provincial-level (1.4%) variance partition coefficients showed no substantive geographic variation in early MTCT. In multivariable analysis accounting for the multilevel nature of the data, the following were associated with early MTCT: at the individual level, low maternal-infant ARV uptake [adjusted odds ratio (AOR) = 2.5, 95% CI: 1.7 to 3.5], mixed breastfeeding (AOR = 1.9, 95% CI: 1.3 to 2.9), and maternal age <20 years (AOR 1.8, 95% CI: 1.1 to 3.0); at the facility level, insufficient (≤2) health care personnel for HIV-testing services (AOR = 1.8, 95% CI: 1.1 to 3.0); at the provincial level, PMTCT ARV (maternal-infant) coverage lower than 80% (AOR = 1.4, 95% CI: 1.1 to 1.9) and the number of health professionals per 10,000 population (AOR = 0.99, 95% CI: 0.98 to 0.99). Conclusions: There was no substantial province-/facility-level MTCT difference. This could be due to good overall performance in reducing early MTCT. Disparities in human resource allocation (including allocation of insufficient health care personnel for testing and care at facility level) and PMTCT coverage influenced overall PMTCT programme performance. These are long-standing systemic problems that impact quality of care.

INTRODUCTION

By mid-2014, an estimated 170,000 children in low- and middle-income countries were infected annually with HIV through mother-to-child HIV transmission (MTCT). 1 The World Health Organization outlines a case rate of new pediatric HIV infections ≤50 per 100,000 live births as one of the minimum criteria for validating elimination of MTCT of HIV among children. 2 Sub-Saharan African countries have shown substantial progress in reducing MTCT rates, but achievements vary considerably across populations. 3 In South Africa, though the national early (4-8 weeks postpartum) transmission rate from 2 consecutive national surveys (2010, 2011) was reported at 3.5% [95% confidence interval (CI): 2.9 to 4.1] and 2.7% (95% CI: 2.1 to 3.2), respectively, transmission rates vary between provinces, ranging from 1.4% to 5.9% in 2010 and 2.0% to 6.1% in 2011. 4,5 What explains the variation in transmission rates across provinces is not well understood.
Studies show that the South African health system suffers from historical inequity in health resource allocation among and within provinces. 6 Trends in provincial and local government primary health care (PHC) expenditure per capita (uninsured population) for the years 2011/2012-2014/2015 show an annual minimum 30% difference between the province with the lowest PHC expenditure per capita and the province with the highest. 7-9 Limited research has been done to assess whether the disparity in resource allocation between provinces, and other facility- and provincial-level factors, has any impact on the performance of the prevention of mother-to-child transmission of HIV (PMTCT) programmes across and within provinces. 10 This article aims to understand the contribution of factors at 3 levels, namely individual, facility, and provincial, to the early (4-8 weeks postpartum) MTCT measured in each province of South Africa using data from the 2010 South African PMTCT Evaluation (SAPMTCTE).

Study Design, Sample Size, Sampling

The detailed methods used in this study have been described elsewhere. 11 In brief, the SAPMTCTE, from which the early MTCT and individual-level factors have been obtained, was based on a complex survey design. The total population was first stratified by province (n = 9), and within each province, public PHC clinics and community health centers (CHCs) were stratified into 4 groups based on their 6-week annual immunization numbers (extracted from the 2007 district health information system data) and the antenatal HIV prevalence of the district (from the 2009 antenatal survey): small [<130 annual first-dose diphtheria-tetanus-pertussis (DTP1) coverage], medium (130-300 annual DTP1 coverage), large (≥300 annual DTP1 coverage) with below the 2009 national average antenatal HIV prevalence (<29%), and large with above the 2009 national average antenatal HIV prevalence (≥29%). 12 This was followed by a selection of facilities using probability proportional to size (with replacement) sampling methods. At the final step, a fixed number of individual mother-infant pairs attending 6-week immunization visits were recruited consecutively or systematically from each selected facility within a specified period. Individual mother-infant pairs represent the lower-level (level 1) units, who are nested within health facilities (level 2), and health facilities are nested within provinces (level 3), providing a natural 3-level hierarchy. A detailed description of the sample size is presented elsewhere. 4 In summary, the sample size calculation was targeted to provide stable national and provincial estimates of transmission rates. The following indicators were taken into account in the sample-size calculation: the 2009 antenatal HIV prevalence, 13 transmission rate estimates from 2 previous regional surveys, 14,15 and the coverage of PMTCT antiretroviral (ARV) prophylaxis in each province from district health information system reports, with varying MTCT precision levels by province (ranging from 1% to 2%) and a design effect of 2 to account for clustering within health facilities. Based on this, a desired sample size [ie, the collection of interview data and dried blood spots (DBSs) of infants] of 12,200 mother-infant pairs from 580 facilities was needed.
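For illustration, probability-proportional-to-size sampling with replacement, as used to select the facilities within each stratum, can be sketched in a few lines; the facility sizes below are toy values, not survey data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def pps_with_replacement(sizes, n_draws):
    """Probability-proportional-to-size sampling with replacement.

    `sizes` is a measure of facility size (e.g., annual DTP1 immunization
    counts); each draw selects facility i with probability size_i / sum(sizes).
    """
    sizes = np.asarray(sizes, dtype=float)
    probs = sizes / sizes.sum()
    return rng.choice(len(sizes), size=n_draws, replace=True, p=probs)

# Toy example with hypothetical facility sizes within one stratum:
facility_sizes = [80, 120, 150, 310, 450, 95]
selected = pps_with_replacement(facility_sizes, n_draws=3)
print(selected)  # indices of selected facilities; repeats are possible
```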
Individual-Level Indicators

Mother/caregiver-infant pairs visiting 6-week immunization services at the selected 580 facilities were approached and screened for eligibility by trained nurses. Mothers/caregivers were recruited if their infants were 4-8 weeks old, were receiving DTP1 vaccination that day, had no emergency illness, and the mother/caregiver consented to participate in the study. Those who gave consent were interviewed on antenatal and peripartum services received, socio-demographic indicators, and knowledge about PMTCT and PMTCT services received. The infant Road-to-Health Card was checked for documentation of maternal and infant HIV status, gestational age at birth, and birth weight. Data collection lasted 3 weeks per facility in 8 provinces and 4 weeks per facility in 1 province (Northern Cape). Interview data were collected on hand-held devices (cell phones preprogrammed with a questionnaire). After the interview, individual pretest counseling was given to each mother and, if mothers consented, DBSs of infants were collected using heel-prick. Blood specimens were collected from all infants irrespective of prior knowledge of the HIV-exposure status of the infant. DBS specimens were tested for HIV antibodies by means of an enzyme immunoassay (EIA) (Genscreen HIV1/2 Ab EIA Version 2; Bio-Rad Laboratories, Schiltigheim, France).

Health Facility-Level Indicators

Data on health facility-level indicators were collected from 530 of the selected 580 facilities during a situational assessment conducted 6 months before the SAPMTCTE. Trained field workers used open-ended and close-ended questions to collect data on human resources for health (HRH), referral systems, record keeping, linkages, and organization of systems for PMTCT during structured interviews with clinic managers, district health information officers, immunization nurses, PMTCT nurses, and sick-child (integrated management of childhood illnesses) nurses.

Provincial-Level Indicators

Provincial-level indicators were obtained from publicly available reports from the National Treasury South Africa [on the proportion of the total provincial budget allocated to the Department of Health (DoH) and annual underspending of the health budget] 16-18 and Statistics South Africa (on the provincial proportion whose income or consumption is below the upper-bound poverty line and the proportion of children living in rural areas 19,20). Aggregated data were extracted from the National DoH (NDoH) Antenatal Survey report (provincial HIV prevalence among pregnant women), 13 the South African Health Review, 21 and the 2010 District Health Barometer (proportion of under-18 facility deliveries, PHC expenditure per capita, and percent of expenditure on PHC facilities). 22 Provincial HRH data (total health professionals per 10,000 population) for 2010 were drawn from the NDoH Draft HRH Consultation and Strategy Document. 23 All provincial data gathered were for the year 2009/2010. Provincial perinatal PMTCT ARV regimen coverage and HIV transmission rates were aggregated from the current (SAPMTCTE) data.

Statistical Analysis

All HIV-exposed infants with individual- (interview and PCR-negative or PCR-positive test results), facility-, and provincial-level information were included in this analysis. We used percentages, medians, and inter-quartile ranges (IQRs) to describe data at the individual, health facility, and provincial level. PMTCT ARV regimen coverage (at province level) was defined as the proportion of HIV-positive mothers (identified by a reactive infant EIA test) who received any maternal antiretroviral prophylaxis or treatment (cART) with infant nevirapine (NVP)/azidothymidine.
The United Nations General Assembly (UNGASS) universal PMTCT coverage goal of ≥80% was used as the cut-off for "good" PMTCT ARV coverage. Based on this, provincial PMTCT ARV coverage was categorized into 2 categories: below 80% PMTCT ARV coverage and ≥80% PMTCT ARV coverage. A socio-economic score was created based on the availability of assets (television, car, refrigerator, stove) and dwelling characteristics (type of water source, toilet, fuel, and building material) using the principal component analysis method. Provinces were ranked 1-9 according to their performance on each of the following 4 indicators: PMTCT ARV coverage, HRH (ie, number of health professionals per 10,000 population), budget (proportion of provincial budget allocated for the DoH), and poverty measure (proportion below the upper-bound poverty line). We used multilevel mixed (MLM) effects logistic regression models with random facility-level and provincial-level intercepts to examine correlates of MTCT at the individual, facility, and provincial level. The multilevel analysis was implemented in a stepwise manner, starting with the unconditional model (null model), which was fitted to determine the significance of the 2 random effects (facility and province) and the intraclass correlation coefficient that describes the proportion of variance attributable to clustering at facility and provincial level. We used a likelihood ratio test to compare the null MLM model with a single-level (ie, no random intercept) logit model to determine the significance of the facility and province random intercepts. Three models were subsequently fitted by including (into the null model) individual-level factors (model 1), followed by health facility-level factors (model 2), and provincial-level factors (model 3). Individual-, facility-, and provincial-level indicators were included in the model if their P value in a bivariate analysis was below 0.2. The MLM models were weighted at both the facility (level 2) and individual (level 1) levels. The level 1 weights were computed as the product of the population size (births) and the sample size realization weights. 24 There was no design weight at level 1, as we did a period census in each facility and all eligible mothers up to the required sample size were included. The weight for level 2 (health facility level) was calculated for each facility as the inverse of the sampling probability, which was calculated taking into account the probability proportional to size sampling method. Provinces (level 3) were selected with full certainty, so the weight for level 3 was equal to 1. For the MLM models, level 1 and level 2 weights were scaled using one of the methods discussed by Pfeffermann et al. 25 The method used here is to scale the weights so that their sum equals the effective sample size. This method improves the estimation of variance components when both level 1 and level 2 samplings are noninformative, as is the case in our study. Descriptive analysis incorporated either facility-level nonscaled weights (for facility-level variables) or both facility- and individual-level nonscaled weights (for individual-level variables and for provincial PMTCT ARV regimen coverage).
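The weight scaling described above, in which the weights are rescaled so that their sum equals the effective sample size, can be sketched as follows; the example weights are hypothetical.

```python
import numpy as np

def scale_to_effective_sample_size(weights):
    """Scale sampling weights within a cluster so that they sum to the
    effective sample size n_eff = (sum w)^2 / (sum w^2), one of the
    scaling methods discussed by Pfeffermann et al. Multiplying each
    weight by sum(w)/sum(w^2) achieves exactly this."""
    w = np.asarray(weights, dtype=float)
    return w * (w.sum() / np.square(w).sum())

# Toy example with hypothetical level-1 weights within one facility:
w = np.array([1.0, 1.5, 2.0, 2.5])
ws = scale_to_effective_sample_size(w)
n_eff = w.sum() ** 2 / np.square(w).sum()
print(ws.sum(), n_eff)  # both equal n_eff
```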
From the individual-level variables, gestational age at birth had the highest proportion of missing responses (23%), and among the facility-level variables, 10%-20% of data were missing on a number of variables. To account for uncertainty arising from missing data, we performed multiple imputation using a multilevel random effects multiple imputation model, the REALCOM-Impute package (developed by Harvey Goldstein at the Centre for Multilevel Modelling in Bristol). We had no missing data for level 3 variables; therefore, imputation was only performed for relevant individual- and facility-level variables. Information across levels was used to improve the quality of the imputation. Estimates from the multiply imputed data sets were combined using Rubin's rule. 26 The variance partition coefficient (VPC) was calculated as the proportion of total variance explained by the facility- and provincial-level random effects, respectively. The median odds ratio (MOR), which quantifies the between-cluster (ie, province and facility) variance by comparing 2 persons from different clusters, is reported. All analyses were done using STATA SE (version 12; StataCorp LP, College Station, TX).

Study Sample

The 2010 national SAPMTCTE achieved 83.4% (10,178) of the planned 12,200 sample size: 3 provinces achieved less than 75% of the target sample size, namely Limpopo (LP) (73%), Eastern Cape (EC) (55%), and Northern Cape (NC) (63%). Out of the 10,178 enrolled study participants (infants), 3088 were HIV-exposed; 3085 of these infants, from 530 facilities, with both individual- (interview and PCR-negative or PCR-positive results) and facility-level data, were included in this analysis.

[Table 1 notes: category frequencies do not add to the total N because of missing responses. *People of mixed racial origin. †The socio-economic score was constructed from assets (television, car, refrigerator, and stove) and dwelling characteristics (water, toilet, fuel, and building material) and was used to assess the availability of basic assets/utilities in the house. Only 29.1% (the fourth level) and 9.5% (the fifth level) had the basic assets/utilities, namely a flush toilet (in the house), piped water (in the house), stove, refrigerator, TV, electricity, gas or paraffin for cooking, and a brick/cement house; participants in the fifth level also had a car. ANC, antenatal care; SES, socio-economic score.]

Health Facility-Level Data

Of the facilities visited, 15.2% (65) were community health centers and 84.8% (465) were PHC clinics. Most (71.1%) offered daily PMTCT services. In each facility, a median of 7 (IQR 5-11) staff members were allocated to provide HIV testing services (HTS); 17.9% reported that the task of pre- and post-HIV-test counseling had shifted to lay counselors. Just over half (51.6%) of the selected facilities reported having a separate room allocated for post-test counseling of mothers, and the remaining 48.4% reported that post-test counseling was provided in any available private space/consulting room (Table 1).

Provincial-Level Indicators

HRH distribution across provinces ranged from 33 health professionals per 10,000 total population in North West (NW) to >70 per 10,000 population in NC and Western Cape (WC) provinces. The proportion of the total provincial budget allocated for health was highest in WC (36.0%) and Gauteng (GP) (33.0%) and lowest in Mpumalanga (MP) (23.9%). In terms of socio-economic indicators, most (>60%) of the GP and WC populations were living above the upper-bound poverty line and were urbanized (with <10.0% rural population) compared with the other 7 provinces (Table 2).
When provinces were ranked according to their performance on the PMTCT ARV coverage, budget allocation, poverty measure, and HRH indicators, WC and GP overall ranked as the best-performing provinces, whereas LP, EC, and MP ranked as the least well-performing provinces.

Multilevel Mixed Effects Model With Random Effects Only (Null Model)

When the multilevel (3-level) model for MTCT was compared with the single-level (null) logistic regression model for MTCT, no significant (P value = 0.26) differences were found, showing that MTCT is not structurally correlated at the hierarchical levels of province and facility (Table 3). Only a small proportion of the variance in early MTCT was attributable to the differences seen in MTCT rates across facilities (VPC = 5.0%; MOR = 1.5) and provinces (VPC = 1.4%; MOR = 1.2) (Table 4).
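For readers unfamiliar with these measures, the latent-variable VPC for a multilevel logistic model uses a level-1 residual variance of π²/3, and the MOR is commonly computed as exp(√(2σ²)·Φ⁻¹(0.75)) following Merlo and colleagues. The sketch below uses illustrative variance components, chosen only to roughly reproduce the reported magnitudes (they are not the fitted values):

```python
import numpy as np
from scipy.stats import norm

LEVEL1_VAR = np.pi**2 / 3  # latent residual variance for a logistic model

def vpc(sigma2_level, sigma2_other):
    """Variance partition coefficient for one level of a multilevel
    logistic model (latent-variable method)."""
    total = sigma2_level + sigma2_other + LEVEL1_VAR
    return sigma2_level / total

def mor(sigma2_level):
    """Median odds ratio for a given cluster-level variance:
    exp(sqrt(2*sigma^2) * z_0.75)."""
    return np.exp(np.sqrt(2 * sigma2_level) * norm.ppf(0.75))

# Illustrative variance components (assumptions, not the fitted values):
sigma2_facility, sigma2_province = 0.18, 0.05
print(f"facility VPC = {vpc(sigma2_facility, sigma2_province):.3f}, "
      f"MOR = {mor(sigma2_facility):.2f}")
print(f"province VPC = {vpc(sigma2_province, sigma2_facility):.3f}, "
      f"MOR = {mor(sigma2_province):.2f}")
```

With these inputs the code returns a facility VPC of about 5% with MOR ≈ 1.5 and a province VPC of about 1.4% with MOR ≈ 1.2, matching the orders of magnitude reported above.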
Bivariable and Multivariable Multilevel Models

In bivariable and multivariable multilevel modelling, 3 individual-level indicators were significant predictors of early MTCT, namely low uptake of maternal, infant, or both ARVs (dual prophylaxis or cART plus infant NVP) [adjusted odds ratio (AOR) 2.5, 95% CI: 1.7 to 3.5], feeding pattern (mixed breastfeeding vs other) (AOR 1.9, 95% CI: 1.3 to 2.9), and maternal age below 20 years (AOR 1.8, 95% CI: 1.1 to 3.0) (Table 5). The odds ratios for the relationships between feeding pattern and MTCT, and between age <20 years and MTCT, were reduced when controlling for facility-level factors, and the odds ratio between ARV coverage and MTCT was reduced when controlling for provincial-level factors (Table 5).

[Table 2 notes: the table is sorted by transmission rate. Bold text indicates the 4 provinces with the lowest MTCT, lowest % children living in rural areas, lowest % living below the poverty line, highest PMTCT ARV regimen coverage, highest number of health professionals per 10,000 population, and highest % budget allocated to the health department. Numbers have been rounded to one decimal place. *Department of Health human resource consultation document/report. †Treasury reports for 2010. ‡Statistics SA report for 2009; the upper-bound poverty line is defined as below-average per capita spending on food and non-food items (R577 per month for 2009). §Statistics SA report for 2010 (data analysed by the Children's Institute, University of Cape Town). PMTCT ARV regimen coverage is defined as the proportion of HIV-positive mothers (per infant EIA test) enrolled in the SAPMTCTE in each province who received maternal zidovudine (azidothymidine) or triple antiretroviral treatment (cART). FS, Free State.]

Of the facility-level indicators, in the unadjusted model, the number of health professionals allocated for HTS was an influential predictor of early MTCT. After adjusting for individual- and provincial-level factors, facilities that allocated ≤2 staff for HTS had significantly higher odds of transmission (AOR 1.8, 95% CI: 1.1 to 3.0) compared with those that allocated more than 2 health personnel for providing HTS. Other facility-level variables (ie, indicators for referral systems, record keeping, and infrastructure) were nonsignificant in both bivariable and multivariable multilevel modelling (Table 5). Among the provincial-level indicators, in adjusted models, provinces with lower than universal (80.0%) coverage of perinatal PMTCT ARV regimens had significantly higher MTCT (AOR 1.4, 95% CI: 1.1 to 1.9) compared with provinces that achieved universal coverage (Table 5). Provincial variation in HRH distribution was also a significant predictor of MTCT in adjusted models: for each additional health professional per 10,000 population, the odds of MTCT decreased by approximately 1% (AOR 0.99, 95% CI: 0.98 to 0.997) (Table 5). Other provincial indicators, such as provincial budget allocation for health and poverty measures, were significantly associated with transmission only in bivariable analysis and were nonsignificant in multivariable analysis.

DISCUSSION

This study shows no substantive geographic (province-/facility-level) differences in early MTCT. Although early MTCT was associated with both individual and aggregate/contextual (facility- and provincial-level) factors, the overall contribution of aggregate-level factors to early MTCT was modest (provincial VPC = 1.4% and facility VPC = 5%). The lack of significant geographic variation in MTCT could be due to good overall performance across provinces and facilities in reducing perinatal MTCT. Despite a moderate effect on MTCT, some of the aggregate-level factors identified in this study are long-standing problems of the health care system in South Africa with reported serious impacts on quality of care. 10,27,28 Inequitable HRH distribution, one of the provincial-level factors identified as an influential predictor of MTCT in this study, has been reported as a primary bottleneck for delivering quality health care in South Africa. 10,27,28 At the facility level, we found that allocation of ≤2 staff for HIV-testing services was a significant risk factor for MTCT. In the new South African national HTS guideline released in 2016, HTS staff are tasked with a number of responsibilities, including providing pretest information, HIV testing, post-test counseling, and active referral of HIV-positive clients to the ART clinic; recommended referral methods include escorting HIV-positive clients to ARV clinics (if within the same facility) or setting up an appointment at the receiving facility (if ART is not provided within the same facility). 29 While these services are important, facilities with only 2 staff allocated for HTS could struggle to provide them at an acceptable level of quality. More health workers, with the right mix of skills, are needed to provide HIV services of an acceptable standard. We recommend implementing effective recruitment and retention strategies (including appropriate selection of students and training of health professionals in rural areas, financial incentives, and capacity-building support), task shifting, introducing a patient appointment system, and decentralization of services to lower levels of care to redress the inequitable distribution and inadequate staffing of PHC facilities. 27,30,31 The provinces with the least HRH allocation (NW, EC, MP, and LP) need to be prioritized in redressing the inequitable distribution of HRH. In addition, the disparities in HRH within provinces (eg, between rural and urban facilities) should be addressed. The provinces that achieved the UNGASS PMTCT ARV regimen target (≥80% coverage) had a significantly lower MTCT rate compared with provinces that did not achieve the UNGASS target. Three of the provinces in this study achieved at least 80% ARV coverage; early MTCT in 2 of the 3 provinces was ≤2.5%. These low MTCT levels, achieved with PMTCT coverage of 80% and above, show promise for targets to eliminate MTCT by 2020. The study limitations are acknowledged.
The lack of significant variability at the province and facility level could be a result of 2 features: first, the small number of infections overall; and second, the even smaller number of infections at the facility level. Given that the prevalence of our outcome measure (ie, MTCT) is small, a large sample size would be needed at level 1 (individual level) to precisely measure provincial- and facility-level HIV transmission rates. In our study, the sample size achieved at province level was below the required sample size, with 3 of 9 provinces achieving below 75% of the required sample size. As a result, although the MTCT point estimates varied across provinces (ranging between 1.4% and 5.9%), the CIs around these estimates were fairly wide, implying unstable estimates or large variances of the provincial MTCT point estimates. As South Africa contains only 9 provinces, the available observations at level 3 (province level) in our study were also slightly fewer than the recommended minimum: a minimum sample size of 10-30 is recommended at level 3 for multilevel modeling, whereas our sample size (n = 9) was just below the lower limit (10). 32

In conclusion, in this study there were no substantial province-/facility-level MTCT differences because of good overall performance in reducing early MTCT. However, facility- and provincial-level factors play a role in the relationship between individual-level factors and MTCT. Most of the facility-/province-level factors examined (such as human resources) are long-standing problems of the health care system in South Africa. Plans to improve overall maternal and child health outcome indicators should aim to address these aggregate-level as well as individual-level factors. This includes continued investment in human resource management and planning, and improving overall provincial achievement in PMTCT ARV coverage.
A robust embedded load cell sensor for tool life prognosis and smart sawing of medium carbon steel

An embedded load cell sensor is proposed for the tool life prognosis and thrust force control of a band saw machine. The sensor enables the tool life and the surface quality of the machined workpiece to be effectively improved through the use of a single sensing device strategically located in the cutting machine. The feasibility of the proposed sensor is demonstrated experimentally using a double-column horizontal sawing machine with medium carbon steel bars as the workpiece material. An investigation is performed into the effects of the cutting force, feed rate, and machining time on the tool wear and the surface roughness of the machined workpiece. It is shown that the thrust force, tool wear, and surface roughness are strongly correlated and increase over time. Based on the experimental results, a feedback control system is proposed for maintaining a constant thrust force on the band saw during cutting under even the most challenging conditions. Overall, the results confirm that a single embedded load cell sensor located in a key position can provide effective force monitoring. Such force monitoring enables a control methodology to maintain the optimal cutting conditions in the sawing of medium carbon steel and improve the tool life and machined part quality.

Introduction

Sawing is an important manufacturing operation used in many industries to cut raw materials to a specific length before secondary precision manufacturing processes. There are three basic types of sawing process, namely band sawing, circular sawing, and hacksawing, where the choice between them depends primarily on the particular needs of the sawing task. Compared to circular sawing and hacksawing, band sawing achieves a lower kerf width, a higher metal removal rate, and an improved surface finish. As a result, it is frequently the method of choice for high-productivity sawing operations. However, such benefits are dependent on achieving trouble-free operation of the sawing machine and its components. As for any machine tool, maintaining the cutting performance of band saw machines relies on a proper estimation of the tool life and working conditions [1-5]. Thus, monitoring the tool wear and the surface quality of the sawed components, and adjusting the working conditions accordingly, is an important concern. The literature contains many studies on tool condition monitoring (TCM) and life prediction methods. For example, Jemielniak [6] compared the signals obtained from laboratory and industrial cutting force sensors and concluded that cross-talk between the channels had a significant effect on the accuracy of the cutting force measurements in both cases. Choudhury and Rath [7] proposed a method for estimating the tool wear in the milling process based on the relationship between the flank wear and the average cutting force coefficients produced under different cutting speeds, depths of cut, and feed rates. Gao et al. [8] introduced a data-driven model framework for TCM based on a statistical analysis of the cutting force. The validity of the proposed method was demonstrated experimentally through the lathe turning of Inconel 718 workpieces.
Freyer et al. [9] compared the effectiveness of two TCM strategies based on orthogonal and unidirectional cutting force measurements, respectively, and found that the probability of a difference of less than 5 percentage points between the flank wear estimation errors of the two methods was more than 95%. Kaya et al. [10] proposed an online TCM system for milling machines based on an analysis of the measured cutting force and torque by an artificial neural network (ANN). The proposed system was shown to achieve a high correlation rate and a low error ratio between the actual and predicted values of the flank wear in the machining of Inconel 718. Garshelis et al. [11] developed a method for monitoring the cutting tool condition and operating parameters in a general machining process through an inspection of the magnetoelastic rate of change of a torque sensor signal. Ahmad et al. [12] used a three-component piezoelectric transducer to examine the effects of the machining parameters (i.e., the cutting speed and feed rate) and the workpiece shape on the cutting performance of a band sawing machine with a variable pitch combination blade. Andersson et al. [13] detected the variation in the cutting force between the individual teeth of a band saw using a multi-sensor technique and proposed a method for quantifying these variations using a cutting force model based on positional errors of the cutting edge, changes in the tool dynamics during machining, and edge wear of the cutting tool. Thaler et al. [14] presented a method for characterizing the band sawing process based on an analysis of the cutting force signals. It was shown that the force signals provided useful insights into not only the blade geometry but also the homogeneity of the cut workpiece.

It is well known that the surface texture plays an important role in determining how a real workpiece will interact with its environment and can be assessed through in-process monitoring. The surface texture of a finished geometrically defined component essentially represents the fingerprint of all the previous processing stages and is generally quantified by the surface roughness [15,16]. For machining processes, the surface roughness not only provides an important indication of the part quality but also yields valuable insights into the state of the manufacturing process and the cutting tool. Consequently, monitoring and quantifying the surface roughness provides an effective approach for controlling the manufacturing process in such a way as to achieve the required degree of accuracy of the workpiece surface [17-19]. One of the most important factors affecting the surface roughness and machinability of the workpiece is the tool wear. In practice, the tool wear determines not only the surface roughness of the final part but also the cutting force and the tool life. Hence, by monitoring the cutting force, it is possible both to evaluate the evolution of the tool wear throughout the manufacturing process and to estimate the effect of this wear on the surface finish. Various techniques based on multiple sensors have been proposed to monitor the process variables during machining to estimate the tool wear. Bhogal et al. [20] showed that the cutting speed was one of the most important factors affecting tool vibration, and therefore had a critical effect on the surface finish.
Amin et al. [21] investigated the effect of the chatter amplitude on the surface roughness under various cutting conditions, and found that the correlation between the chatter amplitude and the surface roughness increased with an increased cutting speed. Arizmendi et al. [22] proposed a method for predicting the topography, surface roughness, and form errors produced in the peripheral milling process based on an analysis of the tool vibration. David et al. [23] showed that, in the end-milling process, a higher cutting depth and feed rate lead to an increased cutting force and vibrational amplitude, which in turn results in higher surface roughness. Zahoor et al. [24] evaluated the effects of the feed rate, axial depth of cut, and spindle vibration on the surface roughness and tool wear in the vertical milling of AISI P20 steel workpieces using a solid carbide cutter. The results showed that the surface roughness depended mainly on the vibration amplitude and depth of cut, respectively, while the tool wear was governed principally by the vibration amplitude and feed rate.

In general, the accuracy of prognostic estimates of the tool life of machine tools and the surface quality of machined components depends significantly on the method used to collect and process the measurement data. Saxena et al. [25] revealed that prediction discrepancies between different prognostics algorithms can appear due to varied end-user requirements for different applications, time scales, available information, domain dynamics, etc. However, while the tool condition can be accurately assessed through the deployment of multiple sensors on the machine tool system, such an approach is costly and applicable only to laboratory settings. Accordingly, taking the case of a band saw machine for illustration purposes, the present study proposes the use of an embedded load cell sensor, strategically located within the machine tool, to predict the wear of the band saw blade and estimate the surface quality of the machined component based on the measured value of the thrust force acting on the blade. Specifically, the present sensor is designed to be installed close to the blade within the machine tool itself, and therefore enables the cutting force to be measured directly with a high degree of precision. Experimental trials are performed to investigate the effects of the cutting force, feed rate, and machining time on the tool wear and the surface roughness of the machined workpiece, in which the tool wear level and wear area are analyzed and estimated based on machine vision using an overlapping-image technique with transparency adjustment. It is shown that the thrust force, tool wear, and surface quality are strongly correlated. As a result, the thrust force measurements provide a viable approach for evaluating the tool condition and predicting the tool life. Based on the experimental results, a feedback control system is developed for maintaining a stable thrust force on the band saw during cutting. The feasibility of the proposed approach is demonstrated experimentally through the machining of medium- and high-carbon steel workpieces with various cross-sections.
Experimental setup

The experiments were performed on a fully automatic double-column horizontal band saw machine (E-530, Everising Machine Co., Taiwan) and involved sawing off small sections of carbon steel bars using a bi-metal band saw blade (HSS M42 cutting edge, spring steel backing material) with a conventional raker tooth setting. Figures 1 and 2 present a series of photographs showing the machining setup and observation apparatus, respectively. The sensor proposed in the present study had the form of a small metal cylindrical bar with two embedded piezoelectric films (see Fig. 2a). The geometry and dimensions of the sensor, and the installation positions of the two films, were carefully designed by finite element method (FEM) simulations to achieve the optimal tradeoff between the rigidity of the sensor structure and the sensitivity of the force measurement results [26]. A sensor calibration procedure was carried out based on a calibration stand with weights. The calibration laboratory for the weights is accredited to ISO/IEC 17025 to ensure the traceability of the calibrated weights. As shown in the upper-left schematic in Fig. 2b, the sensor was installed within the machine tool itself by replacing one of the guide pins immediately above the band saw. During the sawing process, the thrust force exerted on the saw was transmitted mechanically to the force sensor through the structure of the machine tool, and the resulting signal was collected and analyzed by a multi-channel data logger (GL220, Graphtec Corporation, Japan). An industry-grade packaging technique was employed to protect the sensor from the harsh environment generated during the sawing process. A similar technique has also been utilized to produce a multidirectional force sensor for artificial or robotic finger applications by Lee et al. [27]. Having installed the sensor, experimental trials were performed to investigate the effects of the cutting force, feed rate, and machining time on the wear of the band saw and the surface roughness of the machined bars. To better understand the relationship between the machining time and the tool deterioration of the band saw during the cutting process, a failure criterion for cutting tools was applied using standard ISO 8688-2:1989 [28]. In addition, the machining process was observed in real time using a high-resolution image capture technology [29] on the machine tool (see Fig. 1).

[Fig. 1 caption: System setup of sawing process monitoring.]

The tool wear and surface roughness of the machined parts were estimated at room temperature using a 3D confocal microscope attached to a 5 MP digital camera with a spatial resolution of 0.04 μm, a vertical resolution of less than 0.01 nm, and a 20× objective lens. A similar technique was also used by Jeng et al. [30] in the evaluation of the surface roughness of cold-rolled aluminum sheets.

[Fig. 4 note: labels a, b, and c indicate the wear-in region, steady-state wear region, and failure region, respectively.]

Tool life evaluation

The experiments commenced by measuring the thrust force acting on the band saw during the cutting of JIS S45C medium carbon steel workpieces with dimensions of 100 × 100 mm². The experiments were performed under heavy cutting conditions with a cutting speed of 62 m/min and a cutting rate of 63 cm²/min, where the settings of the operation parameters were determined and provided by the university-industry collaboration partner (Everising Machine Co.).
The cutting process was performed continuously over the full band saw life cycle, including the run-in stage, the steady-state cutting stage, the wear stage, and the failure stage. Figure 3 shows the variation of the thrust force over time (in seconds) for different numbers of sections cut in the initial stage of the sawing process. It is seen that the thrust force increases and becomes increasingly unstable as the number of repeated cuts increases. Figure 4 shows the variation of the average thrust force of a complete cut over the full band saw life cycle. As shown, the force response increases over time, and can be divided into three main regions, namely (a) an initial break-in region with a rapidly increasing wear rate, (b) a steady-state wear region with a uniform wear rate, and (c) a failure region with a rapidly increasing wear rate. It is noted that the measurement results are similar to those reported for milling operations by Groover [31]. Figures 5 and 6 show the face wear and flank wear, respectively, of the band saw blade in the run-in and failure stages of the band saw life cycle, where the tool wear level and wear area were acquired and estimated using machine vision with a high-resolution image capture technology [29, 32, 33]. Moreover, the wear area in the captured image was calculated by overlapping images with transparency adjustment in order to assess the various wear stages. In the wear-in and final stages, the increase in wear on both the blade face and the blade flank is more pronounced. In Figs. 5a, b and 6a, b, the corresponding chip shapes and tooth tip shapes are shown below each figure for comparison purposes. It is noted that the higher wear levels of both the blade face and the blade flank in the initial sawing state (i.e., cut sections 50 to 150) may be caused by the deburring process used to remove sharp external edges. Moreover, the measured thrust force profile shown in Fig. 4 provides a feasible means of estimating the onset of the break-in mechanism and the wear behavior of the band saw over the overall cutting process. Figure 7 shows the variation in the workpiece surface roughness (Rq) with the number of sections cut from the bar under the optimal sawing conditions. Note that, to ensure the reliability of the measurement results, eight sampling lengths were measured on each workpiece, and the Rq values were computed as the average of the corresponding measurements; a 95% confidence interval (i.e., two standard deviations) was applied to the values, assuming a normal distribution, so as to minimize the effect of experimental error. As expected, the surface roughness increases with an increasing number of cuts. Moreover, a sudden increase in the blade displacement is observed as the number of sections removed increases beyond 350 (i.e., the estimated life of the tool, see Figs. 4, 5, and 6). In general, the motor current signal is a widely accepted indicator of how the cutting performance of a machine tool degrades over its lifetime, although an experienced operator is often needed to filter out noise to obtain an effective signal. Similar observations were reported in tool-wear monitoring studies for metal cutting [34] and by Kuntoğlu et al. [35].
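The overlapping-image wear measurement mentioned above can be prototyped in a few lines of OpenCV. The sketch below is an illustrative assumption of how such a measurement might be set up (the file names, prior image registration, and binarization threshold are all assumed): it blends a reference tooth image with a worn one for visual comparison, and counts the differing pixels to approximate the wear area.

```python
# Minimal sketch of the overlapping-image wear-area estimate.
# Assumes the two tooth images are already registered (aligned).
import cv2

ref = cv2.imread("tooth_new.png", cv2.IMREAD_GRAYSCALE)    # unworn profile
cur = cv2.imread("tooth_worn.png", cv2.IMREAD_GRAYSCALE)   # after N cuts

# Transparent overlay (50/50 blend) for visual inspection of the receding edge.
overlay = cv2.addWeighted(ref, 0.5, cur, 0.5, 0)
cv2.imwrite("overlay.png", overlay)

# Pixels that differ between the aligned images approximate the wear region.
diff = cv2.absdiff(ref, cur)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # threshold needs tuning

PX = 0.04  # assumed lateral resolution of the microscope [um/pixel]
wear_area_um2 = cv2.countNonZero(mask) * PX ** 2
print(f"estimated wear area: {wear_area_um2:.1f} um^2")
```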
To better understand the relationship between the motor current and the thrust force in metal cutting operations, Fig. 8 shows the variation of the motor current and the thrust force with the number of sections cut from the bar under the optimized sawing conditions. Note that the variation of the motor current is relatively insensitive to the number of sections cut from the bar. This phenomenon can be attributed to the fact that the motor current signal of the sawing machine is insensitive to cutting force fluctuations at higher cutting speeds due to the machine's vibration absorber and viscous damping system. Two band saw blades (samples 1 and 2 in Fig. 8) were tested under the same cutting conditions to study tool wear during the sawing process. Both the thrust force and, to a lesser extent, the corresponding current increase with an increasing number of cuts: as the saw teeth gradually wear, the energy required to cut the workpiece material increases, and hence the required thrust force, the associated current, and the root-mean-square surface roughness of the workpiece all increase. Note, however, that tool wear state estimates can be subject to several uncertainties, such as invalid data, environmental noise, and a single sensor being affected by multiple faults. Cutting parameter optimization In practical sawing operations, it is desirable to set the cutting parameters (i.e., the cutting speed and cutting rate) in such a way as to achieve an acceptable tradeoff between the productivity of the machining process and the blade life. Accordingly, a further series of experiments was performed in which JIS S45C medium carbon steel bars with a diameter of 220 mm were sawed with gradually increasing cutting rates in the range of 85 to 135 cm²/min. To ensure the reliability of the experimental results, the cutting trials were performed three times for each cutting rate, with the corresponding thrust force and workpiece surface roughness measured each time. Figure 9 shows the variation in the measured thrust force over time for the different cutting rates. As expected, the cutting force increases with an increased cutting rate due to the corresponding increase in the frictional work required to cut the workpiece material. Figure 10 shows the change in the amplitude of the thrust force with the cutting rate. Once again, the amplitude of the thrust force increases with an increased cutting rate. Finally, Fig. 11 shows the variation of Rq with the cutting rate. The surface roughness increases only moderately as the cutting rate first increases from 85 cm²/min to 115 cm²/min. However, as the cutting rate is increased beyond 115 cm²/min, a significant increase in the surface roughness occurs. Thus, the limiting value of the cutting rate for the S45C material during sawing was determined to be 115 cm²/min. Thrust force feedback control methodology In the process of sawing a bar with a constant and solid cross-section, the contact area first increases toward a maximum value as the blade penetrates the workpiece, and then decreases to zero as the blade leaves the workpiece. The change in contact area prompts a variation in the thrust force acting on the blade and therefore induces a change in the wear rate. Thus, to reduce the wear of the blade, it is desirable to minimize the variation of the thrust force such that it maintains an approximately constant value throughout the cutting process.
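In software terms, the constant-force strategy described in the next section amounts to a feedback loop that trims the feed rate whenever the measured thrust force drifts from a set-point. The following sketch is purely illustrative (the proportional gain, limits, and simulated plant are assumptions, not the controller actually deployed on the E-530):

```python
# Minimal sketch of a proportional thrust-force feedback loop.
# The plant model below is a stand-in for the real sensor/machine I/O.
import random

SETPOINT_N = 300.0               # target thrust force [N]
KP = 0.002                       # proportional gain [(mm/min) per N], to be tuned
FEED_MIN, FEED_MAX = 5.0, 60.0   # feed-rate limits [mm/min]

def read_thrust_force(feed_rate):
    # Simulated plant: thrust force grows with feed rate, plus sensor noise.
    return 9.5 * feed_rate + random.gauss(0.0, 5.0)

feed_rate = 30.0
for _ in range(200):             # 200 control steps at an assumed 10 Hz rate
    force = read_thrust_force(feed_rate)
    error = SETPOINT_N - force   # positive error: cutting too lightly
    feed_rate = min(max(feed_rate + KP * error, FEED_MIN), FEED_MAX)
    # set_feed_rate(feed_rate)   # real machine I/O would be called here

print(f"converged feed rate: {feed_rate:.1f} mm/min")
```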
Accordingly, the present study proposes a thrust force feedback control methodology in which the thrust force is constantly monitored by the embedded force sensor, and the machining conditions (e.g., the cutting speed and feed rate) are adjusted as required to maintain a constant force. The feasibility of the proposed control method was evaluated using JIS S45C and JIS S60C medium- and high-carbon steel bars with a diameter of 150 mm. Figure 12 shows the force-time diagram obtained in three tests performed using the JIS S45C workpiece without feedback control. It is clearly seen that the thrust force varies continuously as the sawing process proceeds. Figure 13 compares the set-point force value and the actual force value when sawing the two workpieces (i.e., the JIS S45C and JIS S60C carbon steel bars) using the proposed feedback control method. From inspection, the error between the two thrust force values is just 2% and 1.6% for the two materials, respectively. In other words, the feasibility of the proposed method is confirmed. Figure 14 shows the force-time diagrams obtained in the sawing of the JIS S45C workpiece with set-point thrust force values in the range of 200 to 400 N and the feedback control method applied. For comparison purposes, the force-time diagram obtained in the absence of feedback control is also shown. The results confirm that the thrust force remains approximately constant, even at the higher thrust force set points. Figure 15 presents the corresponding results obtained for the sawing of the JIS S60C high carbon steel under thrust forces in the range of 500 to 900 N. It is again seen that the feedback control methodology achieves an approximately constant thrust force for each of the considered force settings. A final series of experiments was performed using JIS S45C O-beam and H-beam workpieces. The corresponding force-time diagrams are presented in Figs. 16 and 17, respectively. Note that the O beam had an outer diameter of 133 mm and a wall thickness of 1.4 mm, while the H beam had a height and width of 125 mm and a thickness of 10 mm. Comparing the force-time characteristics of the O and H beams, the force amplitude under feedback control is lower for the O beam than for the H beam due to the O beam's fully symmetrical cross-section. For both beams, the thrust force with feedback control is more stable than that without feedback control. These results confirm the effectiveness of the proposed thrust control methodology, even in the sawing of workpieces with more complex cross-sectional geometries. Conclusion This study has presented an embedded force sensor for detecting the cutting force generated during the cutting of medium carbon steel using a double-column horizontal sawing machine. The proposed sensor has been used to explore the effects of the cutting force, cutting rate, and machining time on the wear of the band saw blade and the surface roughness of the workpiece. It has been shown that the cutting force increases with an increasing machining time and results in a corresponding increase in both the tool wear and the machined surface roughness. It has been further shown that the end of the wear life of the band saw blade can be reliably predicted from a sudden increase in the measured cutting force or surface roughness.
The developed force sensor has been used to realize a feedback control mechanism for maintaining a stable cutting force during the sawing of carbon steel workpieces with various carbon contents and cross-sections (bar, O beam, and H beam). The proposed sensor has many advantages for practical applications, including a low cost, a small size, and good robustness. Notably, the present article provides three key contributions to the sawing process. First, a sensor is developed that is installed close to the blade within the machine tool itself, and therefore enables the cutting force to be measured directly with a high degree of precision. Second, the sensor allows the cutting force and cutting conditions to be monitored and controlled using only a single sensing device. It is thus suitable for practical, industrial applications. Finally, it establishes guidelines for estimating the tool life and surface quality, which are intimately tied to current and future research on smart sawing. Overall, the results show that the proposed sensor provides a highly effective method not only for monitoring the condition of the band saw blade, but also for adjusting the machining parameters adaptively in such a way as to maintain a constant cutting force, thereby reducing the tool wear and improving the surface roughness of the machined components.
Slice Finder: Automated Data Slicing for Model Validation As machine learning (ML) systems become democratized, it becomes increasingly important to help users easily debug their models. However, current data tools are still primitive when it comes to helping users trace model performance problems all the way to the data. We focus on the particular problem of slicing data to identify subsets of the validation data where the model performs poorly. This is an important problem in model validation because the overall model performance can fail to reflect that of the smaller subsets, and slicing allows users to analyze the model performance on a more granular level. Unlike general techniques (e.g., clustering) that can find arbitrary slices, our goal is to find interpretable slices (which are easier to act on than arbitrary subsets) that are problematic and large. We propose Slice Finder, an interactive framework for identifying such slices using statistical techniques. Applications include diagnosing model fairness and fraud detection, where identifying slices that are interpretable to humans is crucial. I. INTRODUCTION Machine learning (ML) systems [8] are becoming more prevalent thanks to a vast number of success stories. However, the data tools for interpreting and debugging models have not caught up yet, and many important challenges exist to improve our model understanding after training [14]. One such key problem is to understand whether a model performs poorly on certain parts of the data, hereafter also referred to as slices. Example 1. Consider a Random Forest classifier that predicts whether a person's income is above or below $50,000 (UCI Census data [29]). Looking at Table I, the overall metrics may be considered acceptable, since the overall log loss (a widely-used loss metric for binary classification problems) is low for all the data (see the "All" row). However, the individual slices tell a different story. When slicing data by gender, the model is more accurate for Female than Male (the effect size defined in Section II captures this relation by measuring the normalized loss metric difference between the Male slice and its counterpart, the Female slice). The Local-gov White slice is interesting because its average loss metric is on par with Male, but the effect size is much smaller (by convention, d ≤ 0.3 is small). A small effect size means that the loss metric on Local-gov White is similar to the loss metric on other demographics (defined as counterparts in Section II). Hence, if the log loss of a slice and that of its counterpart are both unacceptable, then it is likely that the model is bad overall, not just on a particular subset. Lastly, we see that people with higher education degrees (Bachelors, Masters, Doctorate) suffer from worse model performance: their losses are higher than those of their counterparts, indicating a higher error concentration. Thus, slices with high effect size are important for model validation, to make sure that the model does not underperform on certain parts of the data. The problem is that the overall model performance can fail to reflect that of smaller data slices. Thus, it is important that the performance of a model is analyzed on a more granular level. While this is a well-known problem [31], current techniques to determine under-performing slices largely rely on domain experts to define important sub-populations (or at least specify a feature dimension to slice by) [4], [23].
Unfortunately, ML practitioners do not necessarily have the domain expertise to know all important under-performing slices in advance, even after spending a significant amount of time exploring the data. In this problem context, enumerating all possible data slices and validating model performance on each is not practical due to the sheer number of possible slices. Worse yet, simply searching for the most under-performing slices can be misleading because the model performance on smaller slices can be noisy; without any safeguard, this leads to slices that are too small to have a meaningful impact on the model quality or that are false discoveries (i.e., non-problematic slices appearing as problematic). Ideally, we want to identify the largest truly problematic slices among the smaller slices whose behavior is not fully reflected by the overall model performance metric. There are more generic clustering-based algorithms for model understanding [27], [32], [33], but these find arbitrary subsets that are hard to interpret. A good technique to detect problematic slices for model validation thus needs to find easy-to-understand subsets of data and ensure that the model performance on the subsets is meaningful and not attributed to chance. Each problematic slice should be immediately understandable to a human without guesswork. The problematic slices should also be large enough so that their impact on the overall model quality is non-negligible. Since the model may have a high variance in its prediction quality, we also need to be careful not to choose slices that are false discoveries. Finally, since the slices have an exponentially large search space, it is infeasible to manually go through each slice. Instead, we would like to guide the user to a handful of slices that satisfy the conditions above. In this paper we propose Slice Finder, which efficiently discovers large, possibly-overlapping slices that are both interpretable and actually problematic. A slice is defined as a conjunction of feature-value pairs, where having fewer features is considered more interpretable. A problematic slice is identified based on testing for a significant difference in a model performance metric (e.g., a loss function) between the slice and its counterpart. That is, we treat each problematic slice as a hypothesis and perform principled hypothesis testing to check that it is a true problematic slice and not a false discovery by chance. We discuss the details in Section II. One problem with performing many statistical tests (due to a large number of candidate slices) is an increased number of false positives. This is also known as the Multiple Comparisons Problem (MCP) [9]: imagine a test with a Type-I error (false positive: recommending a non-problematic slice as problematic) rate of 0.05 (a common α-level for statistical significance testing); the probability of having any false positives blows up with the number of comparisons (e.g., 1 − (1 − 0.05)^8 ≈ 0.34 even for just 8 tests, and we may end up exploring hundreds or thousands of slices even for a modest number of examples). We address this issue in Section III-B. In addition to testing, the slices found by Slice Finder can be used to evaluate model fairness or in applications such as fraud detection, business analytics, and anomaly detection, to name a few. While there are many definitions for fairness, a common one is that a model performs poorly (e.g., lower accuracy) on certain sensitive features (which define the slices), but not on others.
Fraud detection also involves identifying classes of activities where a model is not performing as well as it previously did. For example, some fraudsters may have gamed the system with unauthorized transactions. In business analytics, finding the most promising marketing cohorts can be viewed as a data slicing problem. Although Slice Finder evaluates each slice based on its losses under a model, we can also generalize the data slicing problem by assuming a general scoring function to assess the significance of a slice. For example, data validation is the process of identifying training or validation examples that contain errors (e.g., values are out of range, features are missing, and so on). By scoring each slice based on the number or type of errors it contains, it is possible to summarize the data errors through a few interpretable slices rather than showing users an exhaustive list of all erroneous examples. In summary, we make the following contributions: • We define the data slicing problem and the use of hypothesis testing for problematic slice identification (Section II) and false discovery control (Section III-B). • We describe the Slice Finder system and propose three automated data slicing approaches, including a naïve clustering-based approach as a baseline for automated data slicing (Section III). • We present model fairness as a representative use case for Slice Finder (Section IV). • We evaluate the three automated data slicing approaches using real datasets (Section V). A. Preliminaries We assume a dataset D with n examples and a model h that needs to be tested. Following common practice, we assume that each example x^(i) contains features F = {F_1, F_2, ..., F_m}, where each feature F_j (e.g., country) has a list of values (e.g., {US, DE}) or discretized numeric value ranges (e.g., {[0, 50), [50, 100)}). We also have a ground truth label y^(i) for each example, such that D = {(x^(1), y^(1)), ..., (x^(n), y^(n))}. The test model h is an arbitrary function that maps an input example to a prediction, and the goal is to validate whether h is working properly for different subsets of the data. For ease of exposition, we focus on a binary classification problem (e.g., UCI Census income classification) with h that takes an example x^(i) and outputs a prediction h(x^(i)) of the true label y^(i) ∈ {0, 1} (e.g., a person's income is above or below $50,000). A slice S is a subset of examples in D with common features and can be described as a conjunction of the common feature-value pairs ∧_j (F_j op v_j), where the F_j's are distinct (e.g., country = DE ∧ gender = Male), and op can be one of =, <, ≤, ≥, or >. For numeric features, we can discretize their values (e.g., quantiles or equi-height bins) and generate ranges so that they are effectively categorical features (e.g., age = [20, 30)). Numeric features with large domains tend to have fewer examples per value, and hence do not appear as significant. By discretizing numeric features into a set of continuous ranges, we can effectively avoid searching through tiny slices of minimal impact on model quality and group them into more sizable and meaningful slices. We also assume a classification loss function ψ(S, h) that returns a performance score for a set of examples by comparing h's prediction h(x^(i)) with the true label y^(i). A common classification loss function is logarithmic loss (log loss), which in the case of binary classification is defined as: ψ(S, h) = −(1/|S|) Σ_{i∈S} [y^(i) log h(x^(i)) + (1 − y^(i)) log(1 − h(x^(i)))]. The log loss is non-negative and grows with the number of classification errors.
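For concreteness, a minimal NumPy sketch of ψ as the binary log loss over a slice might look as follows (the clipping constant is our own implementation detail, not part of the definition):

```python
# Minimal sketch of the binary log loss psi(S, h) over a slice.
import numpy as np

def log_loss(y_true, y_pred, eps=1e-12):
    """Average binary log loss; y_pred holds predicted P(y = 1)."""
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

print(log_loss([1, 0, 1], [0.9, 0.2, 0.8]))   # small loss: good predictions
print(log_loss([1, 0, 1], [0.5, 0.5, 0.5]))   # random guesser: ~0.693
```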
A perfect classifier h would have a log loss of zero, and a random guesser (h(x) = 0.5) a log loss of −ln(0.5) ≈ 0.693. Also note that our techniques and the problem setup easily generalize to other ML problem types (e.g., multi-class classification, regression, etc.) with proper loss functions/performance metrics. B. Problematic Slice as Hypothesis We define a slice to be problematic if the classification loss function takes vastly different values on the slice and its counterpart. The counterpart slice serves as a reference against which we measure how problematic S is, and its definition depends on the problem at hand. For instance, in the most general case where the user wants to validate whether the model underperforms on any data slice, we define the counterpart as the complement of S (S′ = D − S) and consider the difference ψ(S, h) − ψ(S′, h), assuming ψ is a loss function such as log loss. (The definition of the counterpart can change in other scenarios, as we explain later.) This effectively allows us to identify S with a higher error concentration for h (i.e., most erroneous examples are contained in S and not in S′), which deserves the user's attention for deeper analysis. Finding real problematic slices for model validation is non-trivial, mainly because it requires balancing the magnitude of the difference in loss function values against the size of the slice. That is, a problematic slice must contain more erroneous examples (i.e., the model performs worse) relative to the rest of the data, and it should also be large enough to have a meaningful impact on model quality. In some applications, each example may also have a weight, which reflects its importance. As a result, a slice with few examples can still be considered important due to its large weight sum. In the remainder of the paper, we assume that all weights are 1, but extending to varying weights is straightforward. Interestingly, larger slices tend to have a performance metric (i.e., loss function value) similar to that of the overall dataset with a smaller variance; thus, the difference tends to be smaller. Notice that we are looking at a one-sided difference ψ(S, h) − ψ(S′, h), so large negative difference values with an extreme counterpart ψ(S′, h) are not of interest. On the other hand, if a larger slice has a high positive ψ(S, h) − ψ(S′, h), then the signal is more likely to be real and deserves the user's attention. Based on the previous points, one possible approach to identifying problematic slices would be to rank each slice based on some heuristic combination of its size and difference in average losses. However, such a heuristic is hard to tune and not even practical, assuming that we want a solution that can work with any validation data, model, and loss function. Our solution is instead to treat each problematic slice as a hypothesis and perform a test for the strength of the signal (ψ(S, h) − ψ(S′, h)) and its statistical significance: is the observed difference simply by chance or for real? The definition of problematic slices with respect to their counterparts is general and thus applicable across domains. In addition, the definition naturally translates into hypothesis testing with a null and an alternative hypothesis: H_0: ψ(S, h) ≤ ψ(S′, h) versus H_a: ψ(S, h) > ψ(S′, h). The test accepts S as problematic if it has a large difference and large enough support (number of examples).
The testing is performed based on a score φ of the difference, standardized by the pooled standard deviations of ψ(S, h) and ψ(S′, h), σ_S and σ_S′ respectively (a.k.a. the effect size [1]): φ = (ψ(S, h) − ψ(S′, h)) / √((σ_S² + σ_S′²)/2). The effect size directly measures the strength of the signal (i.e., how problematic the slice is) with respect to the distribution of the loss differences, and the testing ensures that the observed signal is not by chance. The effect size is also a standardized score, for which we consider 0.2 to be small, 0.5 medium, 0.8 large, and 1.3 very large (Cohen's convention [12]). Slice Finder brings the user's attention to a handful of the largest problematic slices by taking all problematic slices S with effect size φ > T and ranking them by size (number of examples). Slice Finder provides a slider for the effect size threshold T so that the user can explore slices with different degrees of problematic-ness (Section III-C). It is also important to note that the reliability of the testing depletes quickly (i.e., false positives become more likely) as we perform numerous tests over a lot of candidate slices; we address this issue in Section III-B. Lastly, our definition of a problematic slice is also applicable to another common scenario, where a modeler wants to check whether any sub-populations would experience degraded performance if she switches from h to h′ (i.e., is a new model h′ safe to push?). In this case, we simply evaluate S with the two different models and consider ψ(S, h′) − ψ(S, h), with an alternative hypothesis H_a: ψ(S, h′) > ψ(S, h). Here the counterpart of S under model h′ is the same slice S under model h. C. Data Slicing Problem The goal of Slice Finder is to identify a handful (e.g., top-K) of the largest problematic slices. Larger problematic slices are preferable because they carry more examples for illustrating the model quality issue and thus have more impact on the model quality. On the other hand, the model performance on a tiny slice does not provide much information since it may well be statistically insignificant (i.e., due to noise), and debugging the model on such a tiny slice would not change the overall model quality much. In addition, fewer features are preferred to make the problematic slices more interpretable. For example, country = DE is more interpretable than country = DE ∧ age = 20-40 ∧ zip = 12345. Problem 1. Given a positive integer K and a threshold T, the data slicing problem is defined as finding the top-K largest slices such that: • Each slice has an effect size of at least T, • The effect size is statistically significant, • No slice can be replaced with another of the same size but with fewer features. Note that the top-K slices do not have to be distinct, e.g., country = DE and education = Bachelors overlap in the demographic of Germany with a Bachelors degree. III. SYSTEM ARCHITECTURE Underlying the Slice Finder system is an extensible architecture that combines automated data slicing and interactive visualization tools. The system is implemented in Python (for single-node processing and the front-end) and C++ (to run the Slice Finder lattice search on a distributed processing framework such as Flume [10]). Slice Finder loads the validation data set into a Pandas DataFrame [30]. The DataFrame supports indexing individual examples, and each data slice keeps a subset of indices instead of a copy of the actual data examples.
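Putting these conventions together, the sketch below (with assumed helper names, not Slice Finder's actual API) represents a slice as an index subset of a DataFrame and computes its effect size φ against its complement:

```python
# Minimal sketch: slices as index subsets, with effect size vs. the complement.
import numpy as np
import pandas as pd

def effect_size(losses, slice_idx):
    """phi = (psi(S) - psi(S')) / sqrt((var_S + var_S') / 2)."""
    in_slice = losses.loc[slice_idx]
    counterpart = losses.drop(slice_idx)
    pooled = np.sqrt((in_slice.var() + counterpart.var()) / 2)
    return (in_slice.mean() - counterpart.mean()) / pooled

# Toy data: per-example losses with one feature column to slice by.
df = pd.DataFrame({
    "gender": ["Male", "Male", "Female", "Female", "Male", "Female"],
    "loss":   [0.9, 1.1, 0.2, 0.3, 1.0, 0.25],
})
male_idx = df.index[df["gender"] == "Male"]   # the slice keeps only indices
print(f"phi(gender = Male) = {effect_size(df['loss'], male_idx):.2f}")
```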
Slice Finder provides basic slice operators (e.g., intersect and union) based on the indices; only when evaluating the ML model on a given slice does Slice Finder access the actual data through the indices to test the model. The Pandas library also provides a number of options to deal with dirty data and missing values; for the work presented here, we dropped NaN (missing) values and any values that deviate from the column types. Once data is loaded into a DataFrame, Slice Finder processes the data to identify the problematic slices and allows the user to explore them. This process comprises three major components, summarized below. Slice Finder searches for problematic slices either by training a CART decision tree around mis-classified examples or by performing a more exhaustive search on a lattice of slices. Both search strategies progress in a top-down manner until they find the top-k large problematic slices with φ ≥ T. The decision tree approach materializes the tree model and traverses it to extract nodes for different request queries (with different k and T). In lattice searching, Slice Finder materializes all the candidate slices, even non-problematic ones. This allows Slice Finder to quickly respond to a new request with a different T or continue searching with more filter clauses. As Slice Finder searches through a large number of slices, some slices might appear problematic by chance (i.e., the multiple comparisons problem [17]). Slice Finder controls this risk by applying a marginal false discovery rate (mFDR) controlling procedure [17]. Slice Finder compiles a final top-k recommendation list with only statistically significant problematic slices. Lastly, even a handful of problematic slices can still be overwhelming, since the user needs to take an action (e.g., deeper analyses or model debugging) on each slice. Hence, it is important to enable the user to quickly browse through the slices by their impact (size) and score (effect size). To this end, Slice Finder allows the user to explore the recommended slices with interactive visualization tools. The following subsections describe each component in detail. A. Automated Data Slicing As mentioned earlier, the goal of this component is to automatically identify problematic slices for model validation. To motivate the development of the two techniques that we mentioned (decision trees and lattice search), let us first consider a simple baseline approach that identifies problematic slices through clustering; we then discuss the two automated data slicing approaches used in Slice Finder that improve on the clustering approach. 1) Clustering: The idea is to cluster similar examples together and take each cluster as an arbitrary data slice. If a test model fails on any of the slices, then the user can examine the data examples within or run a more complex analysis to fix the problem. This is an intuitive way to understand the model and its behavior (e.g., predictions) [27], [32], [33]; we can take a similar approach to the automated data slicing problem. The hope is that similar examples would behave similarly even in terms of data or model issues. Clustering is a reasonable baseline due to its ease of use, but it has major drawbacks: first, it is hard to cluster and explain high-dimensional data. We can reduce the dimensionality using principal component analysis (PCA) before clustering, but many features of the clustered examples (in the original feature vector) still have high variance or a high cardinality of values.
Unlike an actual data slice filtered by certain features, this is hard to interpret unless the user can manually go through the examples and summarize the data in a meaningful way. Second, the user has to specify the number of clusters, which crucially affects the quality of the clusters in terms of both metrics and size. As we want slices that are problematic and large (more impact on model quality), this is a key parameter that is hard to tune. The two techniques that we present next overcome these deficiencies of clustering. The first technique is based on decision trees that capture the distribution of classification results. Here the effect sizes are large, but the slices may be smaller as a result. In contrast, the second technique, called lattice searching, focuses on slices that are neither too small nor too large, but have large-enough effect sizes. 2) Decision Tree Training: To identify more interpretable problematic slices, we train a decision tree that can classify which slices are problematic. The output is a partitioning of the examples into the slices defined by the tree, where each leaf corresponds to a slice described by the feature conditions along its path. For numeric features, this kind of partitioning is natural. For categorical features, a common approach is to use one-hot encoding, where all possible values are mapped to columns, and the selected value results in the corresponding column having a value of 1. To use a decision tree, we first identify the bottom-most problematic slices (leaves) with the highest effect size (i.e., highest error concentration). Then we can go up the decision tree to find larger (and more interpretable) slices that generalize the problematic slices and still have an effect size larger than a user-specified threshold T. The advantage of decision trees is that they have a natural interpretation, since the leaves correspond directly to slices. The downside of using a tree is that it only finds non-overlapping problematic slices. In addition, if the decision tree gets too deep with many levels, then it starts to become uninterpretable as well [18]. The decision tree approach can be viewed as "greedy" because it optimizes on the classification results and is thus not designed to exhaustively find all problematic slices according to Problem 1. For example, if some feature is split on the root node, then it will be difficult to find single-feature slices for other features. In addition, a decision tree always partitions the data, so even if there are two problematic slices that overlap, at most one of them will be found. Hence, a more exhaustive approach is needed to ensure that all possibly-overlapping problematic slices are found. 3) Lattice Searching: The lattice searching approach considers a larger search space where the slices form a lattice, and problematic slices can overlap with one another. We assume that slices only have equality predicates, e.g., ∧_i F_i = v_i. In contrast with the decision tree training approach, lattice searching can be more expensive because it searches overlapping slices. Figure 2 illustrates how the slices are organized as a lattice. The key intuition is to perform a breadth-first search over this lattice (Algorithm 1). The input is the training data, a model, and an effect size threshold T. As a pre-processing step, Slice Finder takes the training data and discretizes numeric features.
For categorical features that contain too many values (e.g., IDs that are unique for each example), Slice Finder uses a heuristic where it considers up to the N most frequent values and places the rest into an "other values" bucket. The possible slices of these features form a lattice where a slice S is a parent of every slice S′ with exactly one more feature-value pair. Slice Finder finds the top-K largest problematic slices by traversing the slice lattice in a breadth-first manner using a priority queue. The priority queue contains the current slices being considered, sorted by descending size and then by ascending number of features. For each slice ∧_{i∈I} F_i = v_i that is popped, Slice Finder checks whether it has an effect size of at least T. If so, the slice is added to the top-K list. Otherwise, the slice is expanded, and the slices {∧_{i∈I} F_i = v_i ∧ G = v | G ∈ F − {F_1, ..., F_|I|}, v ∈ G's values} are added to the queue. Slice Finder optimizes this traversal by avoiding slices that are subsets of previously identified problematic slices. The intuition is that any subsumed (expanded) slice contains a subset of the exact same examples of its parent and is smaller with more filter predicates (less interpretable); thus, we do not expand slices that are already problematic. By starting from the base slices (with a single filter predicate/clause) and expanding only non-problematic slices with one additional predicate at a time (i.e., a top-down search from lower-order slices to higher-order slices), we can generate a superset of all candidate slices. This is similar to the Apriori fast frequent itemset mining algorithm [5], where only large (d − 1)-itemsets are joined together to generate a superset of all large d-itemsets. This process repeats until either the top-K slices have been found or there are no more slices to explore. For example, suppose the base slices A = a_1, B = b_1, B = b_2, and C = c_1 are inserted into the queue. Among them, suppose A = a_1 is the largest slice with an effect size of at least T. This slice is popped from Q and added to the top-K result. Suppose that no other slice has an effect size of at least T, but B = b_1 is the largest. This slice is then expanded to B = b_1 ∧ C = c_1 (notice that B = b_1 ∧ A = a_1 is unnecessary because it is a subset of A = a_1). If this slice has an effect size of at least T, then the final result is {A = a_1, B = b_1 ∧ C = c_1}. The following theorem formalizes the correctness of this algorithm for the slice-identification problem. Theorem 1. The slices identified by Algorithm 1 satisfy the conditions of Problem 1. Proof. Since we only add slices with an effect size of at least T to the result, the first condition is satisfied trivially. The second condition can be proven to hold by contradiction. Suppose a slice S that is popped from the queue has a large enough effect size, but there is another slice S′ that has not yet been added to the result, yet has the same size with fewer features and should have been added to the result first. The ancestors of S′ must all have been popped and expanded before S was popped. In addition, since S′ has fewer features than S, it should have been placed before S in the queue (hence the contradiction). 4) Scalability: Slice Finder optimizes its search by expanding the filter predicate by one additional feature/value at a time (top-down strategy).
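Before turning to scalability, a minimal single-machine sketch of this top-down traversal is shown below (the `effect_size` helper is assumed, e.g., as sketched earlier; duplicate children generated from different parents are not deduplicated here for brevity, and the production version distributes the work over Flume):

```python
# Minimal sketch of the breadth-first lattice search (cf. Algorithm 1).
import heapq

def find_problematic_slices(df, losses, features, T, K):
    top_k, heap, tie = [], [], 0
    for f in features:                        # base slices: one predicate each
        for v in df[f].unique():
            idx = df.index[df[f] == v]
            # Order: descending size, then ascending number of predicates.
            heapq.heappush(heap, (-len(idx), 1, tie, ((f, v),), idx))
            tie += 1
    while heap and len(top_k) < K:
        neg_size, n_pred, _, preds, idx = heapq.heappop(heap)
        if len(idx) < 2:                      # too small to evaluate reliably
            continue
        if effect_size(losses, idx) >= T:     # problematic: keep, do not expand
            top_k.append((preds, len(idx)))
            continue
        used = {f for f, _ in preds}
        for f in features:                    # expand by one more predicate
            if f in used:
                continue
            for v in df.loc[idx, f].unique():
                mask = (df.loc[idx, f] == v).to_numpy()
                heapq.heappush(heap, (-int(mask.sum()), n_pred + 1, tie,
                                      preds + ((f, v),), idx[mask]))
                tie += 1
    return top_k
```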
Unfortunately, this does not completely solve the scalability issue of the data slicing problem, and Slice Finder could still search through an exponential number of slices, especially for big, high-dimensional data sets. To this end, Slice Finder implements two approaches that can speed up the search. Parallelization: For lattice searching, evaluating a given model on a large number of slices one by one (sequentially) can be very expensive. Instead, Slice Finder distributes the slice evaluation jobs (lines 5-10 in Algorithm 1) by keeping separate priority queues Q_d for the different numbers of filter predicates d. The idea is that workers take slices from the current Q_d in a round-robin fashion and evaluate them asynchronously; as they finish evaluating the slices, the workers push the next candidate slices {∧_{i∈I} F_i = v_i ∧ G = v | G ∈ F − {F_1, ..., F_|I|}, v ∈ G's values}, each with one additional filter clause G, to Q_{d+1}. Once done with Q_d (i.e., Q_d is empty and |S| < K), Slice Finder moves on to the next queue Q_{d+1} and continues searching until |S| ≥ K. Keeping slices of different d in separate queues allows multiple workers to evaluate multiple slices in parallel without having to worry about redundant discoveries, because only slices with d + 1 predicates can be subsumed by slices with d predicates. The added memory and communication overheads are negligible, especially with respect to the slice evaluation time. On the other hand, for DT, our current implementation does not support parallel learning algorithms for constructing trees. However, there exist a number of highly parallelizable learning processes for decision trees [35], which Slice Finder could implement to make DT more scalable. Sampling: We take a smaller sample to run Slice Finder if the original data set is too large. Note that the run time is linearly proportional to the sample size, assuming that the run time of the test model is constant for each example. Taking a sample, however, comes with a cost. Namely, we run the risk of false positives (non-problematic slices that appear problematic) and false negatives (problematic slices that appear non-problematic or completely disappear from the sample) due to the decreased number of examples. Since we are interested in large slices that are more impactful to model quality, we can disregard false negatives that disappeared from the sample. Furthermore, we perform significance testing to filter slices that falsely appear as problematic or non-problematic (Section III-B). B. False Discovery Control As Slice Finder tests more slices, there is also the danger of finding more "false positives": slices that are not statistically significant. Slice Finder controls false positives (Type I errors) in a principled fashion using α-investing [17]. Given an alpha-wealth (overall Type I error rate) α, α-investing spends this wealth over multiple comparisons, while increasing the budget α toward subsequent tests with each rejected hypothesis. This so-called pay-out (increase in α) helps the procedure become less conservative and puts more weight on the null hypotheses that are more likely to be faulty. More specifically, an alpha-investing rule determines the wealth for the next test in a sequence of tests. This effectively controls the marginal false discovery rate at level α: mFDR = E(V) / (E(R) + 1) ≤ α. Here, V is the number of false discoveries and R the number of total discoveries returned by the procedure.
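A minimal sketch of a generic α-investing loop is given below; the pay-out ω and the rule for how much wealth to bet on each test are assumptions following the general scheme of [17], with Slice Finder's Best-foot-forward ordering supplying the p-value sequence:

```python
# Minimal sketch of alpha-investing over an ordered stream of p-values.
def alpha_investing(p_values, alpha=0.05, payout=0.05):
    wealth = alpha
    decisions = []
    for p in p_values:              # ordered best-foot-forward (most promising first)
        if wealth <= 0:
            decisions.append(False) # wealth exhausted: no further rejections
            continue
        level = wealth / 2          # assumed betting rule: spend half the wealth
        if p <= level:
            decisions.append(True)  # discovery: earn the pay-out
            wealth += payout
        else:
            decisions.append(False) # failed test: pay level / (1 - level)
            wealth -= level / (1 - level)
    return decisions

# Example: only the first two (most significant) tests are rejected.
print(alpha_investing([0.001, 0.01, 0.2, 0.4]))
```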
Slice Finder uses α-investing mainly because it allows more interactive multiple-hypothesis error control, namely, with an unspecified number of tests in any order. In contrast, more restricted multiple-hypothesis error control techniques, such as the Bonferroni correction and the Benjamini-Hochberg procedure [9], fall short as they require the total number of tests m in advance or become too conservative as m grows large. There are different α-investing policies for testing a sequence of hypotheses. In particular, our exploration strategy orders slices by their significance (t-score) and tests the hypotheses believed most likely to be rejected. This is called the Best-foot-forward policy; we test the seemingly more significant slices with more power, and continue testing the rest only if we have leftover α-wealth. The successful discovery of significant slices earns extra testing power (alpha-wealth), helping us to continue testing until there is no remaining wealth. C. Interactive Visualization Tool Slice Finder interacts with users through the GUI in Figure 3. A: On the left side is a scatter plot that shows the (size, effect size) coordinates of all slices. This gives a convenient overview of the top-k problematic slices, which allows the user to quickly browse through large and problematic slices and compare slices to each other. B: Whenever the user hovers the mouse over a dot, the slice description, size, effect size, and metric (e.g., log loss) are displayed next to it. If a set of slices is selected, their details appear in the table on the right-hand side. C: In the table view, the user can sort slices by any of the metrics in the table. D: On the bottom, Slice Finder provides configurable sliders for adjusting k and T. Slice Finder materializes all the problematic slices (φ ≥ T) as well as the non-problematic slices (φ < T) searched so far. If T decreases, then we just need to re-examine the slices explored so far to find the top-K slices. If T increases, then the current slices may not be sufficient, depending on k, so we continue searching the slice lattice. This is possible because Slice Finder looks for the top-k problematic slices in a top-down manner. IV. USING Slice Finder FOR MODEL FAIRNESS In this section, we look at model fairness as a use case of Slice Finder, where identifying problematic slices can be a preprocessing step before more sophisticated analysis of fairness on the slices. As machine learning models are increasingly used in sensitive applications, such as predicting whether individuals will default on loans [21], commit crime [2], or survive intensive hospital care [19], it is essential to make sure the model performs equally well for all demographics to avoid discrimination. However, models may fail this property for various reasons: bias in data collection, insufficient data for certain slices, and limitations in the model training, to name a few. Model fairness has various definitions depending on the application and is thus non-trivial to formalize (see the recent tutorial [6]). While many metrics have been proposed [15], [16], [21], [24], there is no widely-accepted standard, and some definitions are even at odds. In this paper, we focus on a relatively common definition, which is to find slices of data where the model performs relatively worse according to some of these metrics, which fits nicely into the Slice Finder framework.
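One common instantiation, the equalized-odds criterion discussed next, compares true and false positive rates across a sensitive attribute; a minimal sketch of such a check (with hypothetical column names) is:

```python
# Minimal sketch of a group-conditional rate check for equalized odds.
import pandas as pd

def group_rates(df, group_col):
    """True/false positive rates of binary predictions per group."""
    rates = {}
    for g, sub in df.groupby(group_col):
        pos, neg = sub[sub.y == 1], sub[sub.y == 0]
        rates[g] = {
            "tpr": (pos.y_hat == 1).mean() if len(pos) else float("nan"),
            "fpr": (neg.y_hat == 1).mean() if len(neg) else float("nan"),
        }
    return rates

df = pd.DataFrame({
    "gender": ["Male", "Male", "Female", "Female", "Male", "Female"],
    "y":      [1, 0, 1, 0, 1, 1],
    "y_hat":  [1, 1, 1, 0, 0, 1],
})
# Equalized odds holds when tpr and fpr match across groups.
print(group_rates(df, "gender"))
```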
Using our definition of fairness, Slice Finder can be used to quickly identify interpretable slices that have fairness issues without having to specify the sensitive features in advance. Here, we demonstrate how Slice Finder can be used to find unfairness of the model with respect to equalized odds [21]. Namely, we explain how our definition of a problematic slice using effect size also conforms to the definition of equalized odds. Slice Finder is also generic and supports any fairness metric that can be expressed as a scoring function. Any subsequent analysis of fairness on these slices can be done afterwards. Equalized odds requires a predictor Ŷ (e.g., a classification model h in our case) to be independent of protected or sensitive feature values A ∈ {0, 1} (e.g., gender = Male or gender = Female) conditional on the true outcome Y [21]. In binary classification (y ∈ {0, 1}), this is equivalent to: P(Ŷ = 1 | A = 0, Y = y) = P(Ŷ = 1 | A = 1, Y = y), y ∈ {0, 1} (3). Notice that equalized odds essentially matches the true positive rates (tpr) in the case of y = 1 and the false positive rates (fpr) in the case of y = 0. Slice Finder can be used to identify slices where the model is potentially discriminatory; an ML practitioner can easily identify feature dimensions of the data, without having to manually consider all feature-value pair combinations, on which a deeper analysis and potential model fairness adjustments are needed. The problematic slices with φ > T suffer from a higher loss (lower model accuracy in the case of log loss) compared to their counterparts. If one group enjoys a better rate of accuracy than the other, then that is a good indication that the model is biased. Namely, accuracy is a weighted sum of the accuracies on positive and negative examples in their respective proportions, and thus a difference in accuracy implies differences in tpr (with fnr = 1 − tpr) and/or fpr, assuming there are any positive examples. As equalized odds requires matching the tpr and fpr between the two demographics (a slice and its counterpart), Slice Finder using the log loss ψ can identify slices showing that the model is potentially discriminatory. In the case of the gender = Male slice above, we flag this as a signal of discriminatory model behavior because the slice is defined over a sensitive feature and has a high effect size. There are other standards, but equalized odds ensures that the prediction is non-discriminatory with respect to a specified protected attribute (e.g., gender), without sacrificing the target utility (i.e., maximizing model performance) too much [21]. V. EXPERIMENTS In this section, we compare the two Slice Finder approaches (decision tree and lattice search) with the baseline (the clustering-based approach). We address the following key questions using both real-world and simulated ML problems: • What are the trade-offs between the three automated slicing approaches? • What do we gain from being more exhaustive and searching for overlapping slices (lattice search)? • Are the slices interpretable and actionable? • How efficient are the techniques? A. Experimental Setup We used the following three problems with different datasets and models to compare how the three automated slicing techniques perform in terms of recommended slice quality as well as interpretability.
For all experiments, we run the k-means, decision tree, and lattice search algorithms to recommend top-k slices for model validation using the full data set as described/processed below, except for the scalability experiments (Section V-D), where we used samples. Census Income Classification: We trained a random forest classifier (Example 1) to predict whether the income exceeds $50K/yr based on UCI census data [26]. There are 15 features and 30K examples. Credit Card Fraud Detection: We trained a random forest classifier to predict fraudulent transactions among credit card transactions [13]. This dataset contains transactions that occurred over two days, with 492 frauds out of 284k transactions (examples), each with 29 features. Because the data set is heavily imbalanced, we first undersample non-fraudulent transactions to balance the data. This leaves a total of 984 transactions in the balanced dataset. Click-Through-Rate Prediction: This dataset is proprietary and is used to train neural models for predicting user clicks on an app store. There are several hundred features, but we take a subset of 28 features and train on 50K examples. Figures 4 and 5 show how the lattice search (LS) and decision tree (DT) approaches outperform the baseline (CL) in terms of slice size and effect size. The clustering-based baseline produces large clusters that have a very low effect size. Comparing DT and LS, LS produces larger slices with lower effect sizes (all above the minimum effect size threshold, T = 0.4). This result indicates that LS is good at finding large slices with sufficient effect sizes, whereas DT finds smaller slices with very high effect sizes. Notice that the average effect size of the CL-recommended slices is around 0.0 (and sometimes even negative, which means the slices are not problematic), which illustrates that grouping similar examples does not guide users to problematic data slices. B. Large Problematic Slices LS considers all the possible slices above an effect size threshold T from top to bottom (i.e., it searches slices with fewer filter clauses first); LS continues searching until it finds all k problematic slices (or it runs out of candidates). As the search progresses, LS looks at smaller slices with more filter clauses, which is why LS tends to recommend larger slices just above T. On the other hand, DT slices the data in a way that best explains the misclassified examples. That is, decision boundaries are formed just around any groups of misclassified examples as long as their size is above the minimum leaf size; this behavior allows for high effect size slices. It is interesting to see in Figure 5(b) that DT yields much larger slices than LS. This is because the dataset consists of only numeric (continuous) features. Table II shows the top-3 largest problematic slices among the 10 recommended slices by LS and DT. LS discretizes the numeric features into continuous ranges (e.g., 10 quantiles), whereas DT simply groups misclassified examples at discontinuous value ranges (the value ranges are more dynamic, at a finer and varying granularity). In general, LS recommends larger slices, and DT recommends more problematic slices by overfitting the decision boundaries with more complex filter predicates. Note that the Credit Card Fraud Detection slices are not easily interpretable because all the feature names are anonymized (e.g., V1, V2, ...). C. Adjusting T We show performance results for updating the top-K results when the effect size threshold T is adjusted using the slider.
It is important to note that we do not need to retrain CL or DT (assuming that we grew the tree to a sufficient depth; here, we grew DT with a maximum depth of 20 and a minimum leaf size of 10). In the case of LS, we first run an initial lattice search (e.g., with T = 0.5 and k = 30) and materialize all the rejected candidate slices. In this way, we can simply look through the materialized slices for the top-k largest slices with effect size above any T < 0.5 (increasing T may require an additional lattice search). Figure 6 shows how the average effect size and average slice size of LS and DT change over different T values. LS is much more sensitive to T because it tends to identify larger slices just above the minimum effect size threshold. On the other hand, DT generally recommends high effect size slices; thus, the recommendations are the same for the most part (0.1 ≤ T ≤ 0.6). We exclude CL because T is not enforced on clusters. D. Scalability Slice Finder uses sampling (DT and LS) and parallelization (LS) to be more scalable, especially with big, high-dimensional data sets. Figure 7(a) illustrates how Slice Finder scales with increasing sample sizes (using a single node/worker). The original Census Income Classification data set contains 30K examples with 15 features (both continuous and categorical). The run time of LS increases almost linearly with the sample size. DT also runs faster with a smaller sample, but runs slower than LS because DT always grows a max-depth tree (a modified version of the Classification & Regression Tree algorithm) before traversing it for the top-k problematic slices. We also look at recall, which measures how many of the top-10 large problematic slices based on the full data set are also found among the top-10 slices from a smaller sample. LS retains more than half of the top-10 large problematic slices even with a 10% sample, which is acceptable as the goal of Slice Finder is to surface a handful of large problematic slices to users for deeper analysis. As DT's decision boundaries are formed to best explain groups of mis-classified examples, the boundaries (i.e., the filter predicates of the slices) vary across samples; the recall is 0 (no match) for all samples and 1 (perfect match) for the full data set. Figure 8(a) illustrates how Slice Finder can scale with parallelization. LS can distribute the evaluation (e.g., effect size computation) of the slices with the same number of filter predicates to multiple workers, and for the same Census Income Classification data set (sampling fraction = 1.0), increasing the number of workers results in a better run time. Notice that the marginal run-time improvement decreases as we add more workers. DT is not shown here because the current implementation does not support parallel DT model training. E. False Discovery Control Even for a small data set (or sample), there can be an overwhelming number of problematic slices. The goal of Slice Finder is to bring the user's attention to a handful of large problematic slices; however, if the sample size is small, most slices would contain fewer examples, and thus it is likely that many slices and their effect size measures are seen by chance. In such a case, it is important to prevent false discoveries (e.g., non-problematic slices appearing problematic (φ ≥ T) due to sampling bias).
Figure 8(b) illustrates this in the credit card fraud detection problem, where most problematic slices are similar in size and yet many of them are false discoveries (the rejected). The average size of the accepted (statistically significant) problematic slices was 7.86, versus 8.36 for the rejected ones. Therefore, without proper control over false discoveries, Slice Finder could recommend falsely identified problematic slices. F. Interpretability Users want to see data slices that are easy to understand with a few common features. In other words, performance metrics alone or a cluster of mis-classified examples are not sufficient to understand and describe the model behavior. In practice, a user often goes through all the mis-classified examples (or clusters of them) manually to describe/understand the problem. To this end, Slice Finder can be used as a preprocessing step to quickly identify data slices where the model might be biased or failing, with slices that are easy to describe with a small number of common features. Table III shows the top-10 largest problematic slices in the Census Income Classification problem; the slices are easy to interpret with a small number of common features. We see that the Sex = Male slice has an effect size above T = 0.4 and contains a lot of examples, indicating that the model could use improvements on this slice. It is also interesting to see that the model fails for some sub-demographics of Capital Gain = 0, especially those who are likely to make more money (e.g., work overtime, exec-managerial, or self-employed). We also see that slices associated with high education degrees tend to be problematic further down the list (not shown in the top-10, except for Capital Gain = 0 ∧ Education = Masters or Education-Num = 14). For Click-Through-Rate Prediction as well, Slice Finder shows human-readable feature descriptions of problematic slices that partition the data such that each slice contains more mis-classified examples than the rest of the data (its counterpart). We do not show the slice descriptions because the information is proprietary. VI. RELATED WORK In practice, overall performance metrics can mask issues on a more granular level, and it is important to validate the model accordingly on smaller subsets/sub-populations of data (slices). While this is a well-known problem, existing tools are still primitive in that they rely on domain experts to pre-define important slices. State-of-the-art tools for ML model validation include Facets [3], which can be used to discover bias in the data; TensorFlow Model Analysis (TFMA), which slices data by an input feature dimension for a more granular performance analysis [4]; and MLCube [23], which provides manual exploration of slices and can both evaluate a single model and compare two models. While the above tools are manual, Slice Finder complements them by automatically finding slices useful for model validation. There are also several other relevant lines of work related to this problem, and here we list the work most relevant to Slice Finder. Data Exploration: Online Analytical Processing (OLAP) has long tackled the problem of slicing data for analysis, and the techniques deal with the problem of a large search space (i.e., how to efficiently identify data slices with certain properties). For example, Smart Drilldown [22] proposes an OLAP drill-down process that returns the top-K most "interesting" rules such that the rules cover as many records as possible while being as specific as possible.
Intelligent rollups [34] go in the other direction, where the goal is to find the broadest cube that shares the characteristics of a problematic record. In comparison, Slice Finder finds slices on which the model under-performs without having to evaluate the model on all possible slices. This is different from general OLAP operations based on cubes with pre-summarized aggregates, and the OLAP algorithms cannot be directly used. Model Understanding: Understanding a model and its behavior is a broad topic that is being studied extensively [7], [18], [28], [32], [33], [36]. For example, LIME [32] trains interpretable linear models on local data and random noise to see which features are prominent. Anchors [33] are high-precision rules that provide local and sufficient conditions for a black-box model to make predictions. In comparison, Slice Finder is a complementary tool that surfaces the parts of the data where the model performs relatively worse than on other parts. As a result, there are certain applications (e.g., model fairness) that benefit more from slices. PALM [27] isolates a small set of training examples that have the greatest influence on a prediction by approximating a complex model with an interpretable meta-model that partitions the training data and a set of sub-models that approximate the patterns within each partition. PALM expects as input the problematic example and a set of features that are explainable to the user. In comparison, Slice Finder finds slices with high effect sizes and does not require any user input. Influence functions [25] have been used to compute how each example affects model behavior. In comparison, Slice Finder identifies interpretable slices instead of individual examples. An interesting direction is to extend influence functions to slices, to quantify the impact of each slice on the overall model quality. Feature Selection: Slice Finder is a model validation tool, which comes after model training. It is important to note that this is different from feature selection [11], [20] in model training, where the goal is often to identify and (re-)train on the features (dimensions) most correlated with the target label (i.e., finding representative features that best explain model predictions). Instead, Slice Finder identifies a few common feature values that describe subsets of data with significantly high error concentration for a given model; this, in turn, could help the user interpret hidden model performance issues that are masked by good overall model performance metrics. VII. CONCLUSION We have proposed Slice Finder as a tool for efficiently finding large, problematic, and interpretable slices. The techniques are relevant to model validation in general, but also to model fairness and fraud detection where human interpretability is critical to understanding model behavior. We have proposed two methods for automated data slicing for model validation: decision tree training, which is efficient and finds slices defined as ranges of values, and slice lattice search, which can find overlapping slices and is more effective for categorical features. We also provide an interactive visualization front-end to help users quickly browse through a handful of problematic slices. In the future, we would like to improve Slice Finder to better discretize numeric features and support the merging of slices. We would also like to deploy Slice Finder to products and conduct a user study on how helpful the slices are for explaining and debugging models.
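As a rough illustration of the lattice search summarized in the conclusion, the sketch below enumerates conjunctions of "feature = value" predicates level by level, keeping slices whose effect size clears the threshold and specializing only the rejected ones. This is a simplified reading of the approach, not the authors' code: it assumes categorical features in a pandas DataFrame, reuses the hypothetical effect_size helper from the earlier sketch, and omits the significance testing and parallel evaluation discussed above.

import numpy as np
import pandas as pd

def lattice_search(df, losses, T, k, max_depth=2, min_size=30):
    # df: pandas DataFrame of categorical features; losses: per-example losses.
    singletons = [(((col, val),), (df[col] == val).to_numpy())
                  for col in df.columns for val in df[col].unique()]
    accepted, frontier = [], singletons
    for _ in range(max_depth):
        next_frontier = []
        for preds, mask in frontier:
            if mask.sum() < min_size:
                continue  # too small to report or to specialize further
            phi = effect_size(losses[mask], losses[~mask])
            if phi >= T:
                # Already problematic: prefer this larger, less specific
                # slice and do not narrow it further.
                accepted.append((preds, int(mask.sum()), phi))
            else:
                # Not yet problematic: add one more predicate, using a
                # fixed column order to avoid generating duplicate slices.
                for extra, extra_mask in singletons:
                    if extra[0][0] > preds[-1][0]:
                        next_frontier.append((preds + extra, mask & extra_mask))
        frontier = next_frontier
    accepted.sort(key=lambda s: s[1], reverse=True)  # largest first
    return accepted[:k]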
2018-07-20T00:21:14.088Z
2018-07-16T00:00:00.000
{ "year": 2018, "sha1": "4299f004df8a927dc4132f5f9a98be574c74c2a7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7bcebf742de1022bbe1400f10289337fa47acfb6", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
17222943
pes2o/s2orc
v3-fos-license
Perinatal depression and associated factors among reproductive aged group women at Goba and Robe Town of Bale Zone, Oromia Region, South East Ethiopia Background In sub-Saharan Africa little progress has been made towards achieving the Millennium Development Goals. The lack of achievement of the MDGs is reflected in only minor changes in maternal mortality and child health – this is especially true in Ethiopia. Perinatal depression is common in developing countries, where one in three women has a significant mental health problem during pregnancy and after childbirth. Perinatal depression is associated with inadequate prenatal care and poor maternal weight gain, low infant birth weight, and infant growth restriction. This study determined the prevalence of perinatal depression and its associated factors among reproductive age group women at Goba and Robe town of Bale zone, Oromia Region, South East Ethiopia. A cross-sectional study with simple random sampling was employed to include 340 eligible subjects. The 20-item WHO Self-Reporting Questionnaire, with a cut-off point of 6 and above, was used to separate cases of perinatal depression from non-cases. Data were collected by trained data collectors. Descriptive analysis was done using SPSS Version 16. Multivariate logistic regression was used to identify independent predictors of perinatal depression at 95% CI and a P value of ≤ 0.05. Results The prevalence of perinatal depression was 31.5% (107 of 340 respondents). About 20 (5.9%) and 86 (25.3%) were current smokers and alcohol consumers, respectively. Two hundred seventy-seven (71.2%) of the respondents reported husband support during their pregnancy and after birth, and 195 (59.3%) reported support from the husband's family/relatives. Maternal perceived difficulty of child care, family history of mental illness, family visits during the perinatal period, history of child death and husband smoking status were found to be independent predictors of perinatal depression. Conclusion This study found that 1 in 3 women in this region of Ethiopia have depression. Depression screening is not currently routine care, but should be given due attention given the high prevalence of depression in these populations. Public health agencies could organize special training events for health care workers, including Health Extension Workers, on mental health, and should provide screening services to strengthen mental health in pregnant and postpartum families. Background The perinatal period typically refers to the time from conception to the end of the first postpartum year [1]. This is a time when the woman's life is associated with profound physical and emotional changes, and associated risks for the onset or exacerbation of several mental disorders [2]. One of the most common mental health problems occurring in women during their childbearing years is depression. Perinatal depression refers to major and minor depressive episodes that occur either during pregnancy or after delivery [1]. Perinatal depression is common in developing countries [3] and one in three women has a significant mental health problem [4]. This problem is a serious but under-recognized public health problem in low- and middle-income countries, making a substantial contribution to maternal and infant morbidity and mortality. About 12.5-42% of pregnant women and 12-50% of mothers of newborns in low- and middle-income countries such as Ethiopia screen positive for symptoms of depression [5].
There is evidence indicating that maternal common mental disorders, in particular depressive disorders, pose a serious public health concern because of their adverse effects on infant development [6], such as poor nutrition, stunting, early cessation of breastfeeding, and diarrhoeal disease [7]. Pregnant women or mothers with mental health problems often have poor physical health and also have persistent high-risk behaviours including alcohol and substance abuse. They have an increased risk of obstetric complications and preterm labour because they are less likely to seek and receive antenatal or postnatal care or adhere to prescribed health regimens [8]. It has also been reported that in sub-Saharan Africa little progress has been made towards achieving the Millennium Development Goals; in particular, little improvement is reflected in rates of maternal mortality and child health, which is also true in Ethiopia [9]. A community-based study conducted among mothers revealed that the overall prevalence of maternal depression was about 33% in Ethiopia [10]; however, that study did not include pregnant women, but rather women with a child aged 6-18 months. Studying depression during the perinatal period and its associated factors is therefore very important: it provides crucial information for different stakeholders to improve maternal mental health services, to tackle the grave consequences of depression on the growth and development of children, who are the future of our country, and to support the incorporation of mental health services into maternal health services, which in turn could reduce the maternal mortality rate and its severe consequences. This study aimed to determine the prevalence of perinatal depression and associated factors among reproductive age group women at Goba and Robe Town of Bale zone, Oromia Region, South East Ethiopia. Study area and period Bale is one of the zones in the Oromia region of Ethiopia. The zone has three administrative towns, which are Goba town, Robe town and Ginner town. The study area has two hospitals, health centers and health posts. The study period was from March to April 2014. Study design A community-based cross-sectional survey was employed. Source population All pregnant women and women with a child under one year of age (in the postpartum period) in the selected kebeles of Goba and Robe towns. Study population Pregnant women and women with a child under one year of age from the source population. Sample size determination The sample size was determined based on a single proportion formula considering a 33% prevalence of maternal mental disorder in Ethiopia [10], a 95% confidence level and a 5% margin of error. The final sample size, after including a 5% non-response rate, was 357. Sampling procedures First, three kebeles (East Goba and West Goba from Goba town, and Baha Biftu from Robe town) were randomly selected. Then, households in these kebeles with pregnant women or children under one year of age were identified and a sampling frame was prepared. Finally, simple random sampling was used and eligible subjects were interviewed. Where a selected household contained more than one eligible subject, one of them was selected by the lottery method.
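For reference, the single proportion formula described above works out as follows (a standard calculation shown here for clarity, with Z = 1.96 for the 95% confidence level): n = Z^2 × p(1 − p) / d^2 = (1.96)^2 × 0.33 × 0.67 / (0.05)^2 ≈ 340, and adding the 5% non-response allowance gives 340 × 1.05 ≈ 357.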
Study variables The independent variables included in this study were maternal socio-economic and demographic variables, maternal and family history of mental illness, maternal current alcohol consumption and tobacco use, support during pregnancy and childbirth, family visits during pregnancy and childbirth, support during the perinatal period from families/relatives, age at marriage, plan to have a child, number of total and female children, husband's educational level and employment, husband's use of tobacco and alcohol, husband's history of chronic medical disease (hypertension, diabetes mellitus and any others) and previously experienced death of one of her own children. The dependent variable was perinatal depression. Data collection tool and data collection procedures The 20-item WHO Self-Reporting Questionnaire was used to assess perinatal depression. The tool consists of twenty Yes/No questions with a reference period of the previous thirty days. It has acceptable levels of reliability and validity in developing countries and is recommended by the World Health Organization as a screening tool for depression [11]. A semi-structured, pre-tested and interviewer-administered questionnaire in the Amharic language was used to collect data. The tool also included maternal and husband socio-demographic variables, maternal history of previous mental illness, family history of mental disease, the presence of social support during the perinatal period, husband and maternal tobacco and alcohol use in the last 12 months, and obstetric history such as age at marriage, number of total children, plan to have a child, and experience of the death of one's own child. A study conducted in Ethiopia validated the SRQ-20 for use in a mixed sample of pregnant and postnatal women in the Butajira population, concluded that the overall performance of the SRQ did not differ significantly between pregnant and postnatal women, and identified 6 and above as the cut-off point to differentiate cases from non-cases [12]. Accordingly, for this study a cut-off point of six and over was used to differentiate cases and non-cases of perinatal depression. Data were collected through face-to-face interviews by 10 urban Health Extension Workers over a period of 15 days, after they received intensive training on sampling procedures and on how to approach and conduct the interview with the study subjects. Data analysis Data analysis was conducted using the Statistical Package for the Social Sciences (SPSS) Version 19.0 for Windows. Descriptive analysis was first done on the total sample. Perinatal women were then classified as having depression based on a score greater than or equal to six on the SRQ. Bivariate logistic regression was used to identify independent variables associated with cases of depression. Variables significantly associated in binary logistic regression (P ≤ 0.05) were then included in multiple logistic regression to identify independent predictors of perinatal depression. Statistical significance was declared at P ≤ 0.05. Data quality assurance A pretest was performed on subjects other than the study subjects to check for ambiguous or unclear questions, and the necessary amendments were made. Intensive training was given to data collectors regarding the interview technique and subject selection. The collected data were reviewed and checked for completeness on the day of each data collection by assigned supervisors.
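To make the two-stage analysis described above concrete, here is a minimal sketch of the equivalent workflow in Python (the original analysis was run in SPSS; the file name and variable names below are hypothetical, and the candidate predictors are assumed to be coded as 0/1 indicators):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("perinatal_survey.csv")  # hypothetical coded data set

# SRQ-20 scoring: sum of twenty yes(1)/no(0) items; a score of 6+ is a case.
srq_items = [f"srq_{i}" for i in range(1, 21)]
df["depressed"] = (df[srq_items].sum(axis=1) >= 6).astype(int)

# Stage 1: bivariate logistic regression, one candidate predictor at a time.
candidates = ["child_care_difficulty", "family_history_mental_illness",
              "family_visit", "child_death", "husband_smokes"]
selected = [v for v in candidates
            if smf.logit(f"depressed ~ {v}", data=df).fit(disp=False).pvalues[v] <= 0.05]

# Stage 2: multiple logistic regression with the significant variables
# (assumes at least one variable survived stage 1).
final = smf.logit("depressed ~ " + " + ".join(selected), data=df).fit(disp=False)
print(np.exp(final.params))  # adjusted odds ratios (AORs)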
Ethical considerations Ethical clearance was obtained from the Madawalabu University Research and Community Service Directorate. The zonal health office was asked for permission to conduct the study, after which the permission letter was given to the respective town health offices. Once permission was obtained from the town health offices, the actual study commenced. Informed consent was obtained from each study subject after clarifying the purpose of the study and other relevant information about the study. Privacy and confidentiality were strictly maintained. Results Of the total 357 perinatal women, 340 were included in the study, yielding a response rate of 95.2%. The mean age of the study subjects was 22.9 (SD ± 2.1). Of the total respondents, 258 (75.9%) were literate, 190 (55.9%) were Christian, and 180 (52.9%) were in the 25-34 age group (Table 1). Three hundred twenty-two (94.7%) of respondents reported a previous history of mental illness and 323 (95%) had a family history of mental illness. Nearly six percent of respondents were current smokers and 86 (25.3%) were current alcohol consumers (Table 2). Husband or marital factors About 278 (87.1%) reported that their husbands were literate. Two hundred seventy-seven (71.2%) of the respondents reported that their husbands provided support during pregnancy and after birth, while the rest did not. Respondents were also asked about psychological and physical abuse by husbands, and about 57 (17.9%) and 79 (24.8%) reported such abuse, respectively. About 137 (42.9%) of study subjects reported that their husbands used substances. About 195 (59.3%) reported support from family/relatives and about 213 (64.3%) reported visits from their families/relatives during pregnancy and after birth at least twice (Table 3). Obstetric factors The mean age at marriage was 21.5 (SD ± 3.5). Of the total respondents involved in this study, about 45 (13.3%) had no children, 150 (44.2%) had two to four children and 38 (11.2%) had more than four children. Concerning pregnancy status, about 120 (35.6%) reported that the pregnancy was not planned, and the rest reported otherwise. About 34 (12%) reported that they had experienced the death of their own child (Table 4). Prevalence of perinatal depression One hundred seven (31.5%) were classified as having perinatal depression and about 233 (68.5%) were classified as not having perinatal depression. Factors associated with perinatal depression among the study subjects Women who had no perceived difficulty of child care and women whose husbands did not use tobacco were less likely to have perinatal depression compared to their counterparts (COR = 0.32, 95% CI 0.19-0.51 and COR = 0.32, 95% CI 0.18-0.57, respectively). The odds of having depression were higher among women who had no family/relative support, no family visits and an unplanned pregnancy compared to their counterparts (Table 5). Results from the multivariate analysis revealed that maternal perceived difficulty of child care, family history of mental illness, family visits during the perinatal period, history of child death and husband smoking status were independent predictors of perinatal depression. The odds of having depression among women/mothers who had not experienced an adverse life event (death of their own child) were lower compared to their counterparts (AOR = 0.30, 95% CI 0.11-0.86) (Table 6).
The odds of having depression among women/mothers who were not supported by their families were higher compared to their counterparts (AOR = 3.25, 95% CI 1.11-9.52). Discussion Although pregnancy and childbirth are generally viewed as a joyful time for most families, they also put women at risk of developing mental health problems such as perinatal depression due to different factors. The prevalence of perinatal depression varies across regions of the world. These differences might be due to differences in the type of instrument and cutoff score used, cultural variables, differences in perception of mental health, differences in socioeconomic environments, levels of social support or its perception, as well as biological vulnerability factors [1]. The perinatal period is also a high-risk time for the emergence of depressive symptoms [13], and these depressive symptoms can be efficiently measured using screening questionnaires, which can be used as proxy measures of depressive disorder after validation against gold standard tools. This particular study used the WHO Self-Reporting Questionnaire, a 20-item questionnaire validated for use in developing countries such as Ethiopia. Recent studies in low- and middle-income countries reported that the prevalence of maternal mental health problems ranges from 10-41%, depending on the place and time of the perinatal period studied and the instruments employed [5]. This study also found that the prevalence of perinatal depression was about 31.5%, which is within the range reported for low- and middle-income countries. A community-based study conducted among mothers using the SRQ instrument, with a cut-off point of seven and above, revealed that the overall prevalence of maternal depression was about 30% in India and 30% in Peru [10]. This study found a comparable figure, but one higher than the prevalence of depression in Indonesia, which was about 22% [14]; however, the latter study was conducted using the EPDS with a cut-off point of 12 and above for postnatal depression. Research conducted by Rochat et al. among pregnant women of South Africa using the EPDS found that the prevalence of depression was about 41%, which was higher than the finding of this study. The possible reason may be differences in the lifestyle of the study participants and the instrument used [15]. Another study conducted in Ethiopia revealed that the prevalence of perinatal depression was about 33%. This finding is comparable with the finding from this study, possibly because the lifestyle, culture and health systems within the country are similar [10]. Depression has some identifiable risk factors. Depression may result from poor environmental factors, such as lack of food, inadequate housing, little financial support, and insufficient family support (e.g., an uninvolved husband or partner) [16]. Social support has been demonstrated to be important in the transition to motherhood and has an impact on emotional coping [17]. It has direct effects on emotional stability, attenuates the effects of stressful life events, and prevents depression [18]. A study conducted among Thai women identified lack of social support as an independent predictor of postpartum depression [19]. Another study conducted in Indonesia also showed that lack of support from the husband was associated with maternal perinatal depression during pregnancy [20]. The current study likewise revealed that not having support from families or relatives during the perinatal period was an independent predictor of perinatal depression.
This may be because not having social support makes women vulnerable to stress, worthlessness, and hopelessness [21]. Different studies have found that a history of pregnancy loss, stressful life events (death of a child) and financial difficulties were associated with maternal depression [22,23]. The current study also found that perceived difficulty of child care and the experience of child death were significant predictors of perinatal depression. Most women receive some form of prenatal care, making several visits to perinatal professionals during the course of pregnancy and after childbirth. These visits present a unique opportunity for health care professionals for the early identification of maternal depression. This finding also helps policy makers to build a comprehensive network of community perinatal services and service providers to strengthen the mental health of the pregnant and postpartum family. This social network might provide a range of supports such as parent education on child development issues, postpartum home visits, nutritional counselling, and education on mental health problems and their signs and symptoms, so that mothers and other family members seek health care. These services can be provided by Health Extension Workers so that the risks of maternal mental health problems and their consequences can be intervened upon. The current study tried to determine perinatal depression and its associated factors that are amenable to change. Its strengths are that it used the WHO SRQ-20 questionnaire validated in the Ethiopian context and that data were collected by trained urban Health Extension Workers. Its limitations are that the study did not address biological factors (hormonal and chemical changes experienced by women during the perinatal period) or the HIV/AIDS status of the women. Since it used a cross-sectional study design, cause-effect relationships cannot be determined, and longitudinal prospective research is needed to fully understand the nature of these factors in perinatal depression. Conclusion This study found that 1 in 3 women in this region of Ethiopia have depression. Depression screening is not currently routine care, but should be given due attention given the high prevalence of depression in these populations. Public health agencies could organize special training events for practitioners and support staff within the maternal and child health profession. Health Extension Workers should be trained in the detection and screening of mental health problems and should provide services to strengthen the mental health of the pregnant and postpartum family.
2017-07-11T08:15:43.294Z
2015-05-14T00:00:00.000
{ "year": 2015, "sha1": "7f9640618b336acdc98ed4235cd25e01a39b7465", "oa_license": "CCBY", "oa_url": "https://mhnpjournal.biomedcentral.com/track/pdf/10.1186/s40748-015-0013-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7f9640618b336acdc98ed4235cd25e01a39b7465", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7107450
pes2o/s2orc
v3-fos-license
Safety and immunogenicity of an inactivated whole cell tuberculosis vaccine booster in adults primed with BCG: A randomized, controlled trial of DAR-901 Background Development of a tuberculosis vaccine to boost BCG is a major international health priority. SRL172, an inactivated whole cell booster derived from a non-tuberculous mycobacterium, is the only new vaccine against tuberculosis to have demonstrated efficacy in a Phase 3 trial. In the present study we sought to determine if a three-dose series of DAR-901, manufactured from the SRL172 master cell bank by a new, scalable method, was safe and immunogenic. Methods We performed a single site, randomized, double-blind, controlled, Phase 1 dose escalation trial of DAR-901 at Dartmouth-Hitchcock Medical Center in the United States. Healthy adult subjects age 18–65 with prior BCG immunization and a negative interferon-gamma release assay (IGRA) were enrolled in cohorts of 16 subjects and randomized to three injections of DAR-901 (n = 10 per cohort), or saline placebo (n = 3 per cohort), or two injections of saline followed by an injection of BCG (n = 3 per cohort; 1–8 x 10^6 CFU). Three successive cohorts were enrolled representing DAR-901 at 0.1, 0.3, and 1 mg per dose. Randomization was performed centrally and treatments were masked from staff and volunteers. Subsequent open label cohorts of HIV-negative/IGRA-positive subjects (n = 5) and HIV-positive subjects (n = 6) received three doses of 1 mg DAR-901. All subjects received three immunizations at 0, 2 and 4 months administered as 0.1 mL injections over the deltoid muscle alternating between right and left arms. The primary outcomes were safety and immunogenicity. Subjects were followed for 6 months after dose 3 for safety and had phlebotomy performed for safety studies and immune assays before and after each injection. Immune assays using peripheral blood mononuclear cells included cell-mediated IFN-γ responses to DAR-901 lysate and to Mycobacterium tuberculosis (MTB) lysate; serum antibody to M. tuberculosis lipoarabinomannan was assayed by ELISA. Results DAR-901 had an acceptable safety profile and was well-tolerated at all dose levels in all treated subjects. No serious adverse events were reported. Median (range) 7-day erythema and induration at the injection site for 1 mg DAR-901 were 10 (4–20) mm and 10 (4–16) mm, respectively, and for BCG, 30 (10–107) mm and 38 (15–55) mm, respectively. Three mild AEs, all headaches, were considered possibly related to DAR-901. No laboratory or vital signs abnormalities were related to immunization. Compared to pre-vaccination responses, three 1 mg doses of DAR-901 induced statistically significant increases in IFN-γ response to DAR-901 lysate and MTB lysate, and in antibody responses to M. tuberculosis lipoarabinomannan. Ten subjects who received 1 mg DAR-901 remained IFN-γ release assay (IGRA) negative after three doses of vaccine. Conclusions A three-injection series of DAR-901 was well-tolerated, had an acceptable safety profile, and induced cellular and humoral immune responses to mycobacterial antigens. DAR-901 is advancing to efficacy trials. Trial registration ClinicalTrials.gov NCT02063555
Introduction Elimination of tuberculosis by 2035 is a major global health priority. This goal cannot be achieved with existing approaches to treatment and prevention [1]. Among newer prevention strategies in development, an improved vaccine strategy against tuberculosis is one of the most promising. Both improved priming vaccines and new booster vaccines are in development; however, modelling indicates that an adolescent and adult booster would have a greater impact on the epidemic over the initial several decades [2][3][4]. Development of new vaccines against tuberculosis and selection for advancement to human trials has been based largely on molecular discovery and animal challenge models. Candidates include attenuated live vaccines, subunit vaccines and inactivated vaccines [5]. Since no existing animal challenge model predicts vaccine protection in humans, we chose a vaccine candidate based on available clinical observations. Epidemiologic studies indicate that prior infection with either non-tuberculous mycobacteria or Mycobacterium tuberculosis itself provides protection against subsequent exposure to tuberculosis [6][7][8]. Prior tuberculosis vaccine trials have shown that whole cell live bacille Calmette-Guerin (BCG), live M. microti, and inactivated M. bovis each demonstrate efficacy in preventing tuberculosis in humans [9][10][11]. We therefore hypothesized that human genetic and pathogen antigenic diversity required whole-cell, polyantigenic challenge for protection against tuberculosis [12]. The Dartmouth group began studies with SRL172, an inactivated whole cell BCG booster derived from a heat-inactivated non-tuberculous mycobacterium deposited at the National Collection of Type Cultures (NCTC, London, UK) under accession number 11659. Although originally identified as Mycobacterium vaccae by phenotypic methods, 16S rRNA gene sequencing of the SRL172 seed strain demonstrates >99.6% homology to the reference 16S rRNA sequence for Mycobacterium obuense. After demonstrating the safety and immunogenicity of a multiple-injection series of SRL172 in Phase 1 and 2 trials, we conducted a seven-year, 2,013-subject randomized, controlled Phase 3 trial in Tanzania showing that a five-injection series had acceptable safety, was immunogenic, and reduced culture-confirmed tuberculosis by 39% in HIV-infected persons [13][14][15][16]. The agar-based manufacturing method for SRL172, however, was not scalable.
We have now developed a new, scalable broth-based manufacturing method using the original master cell bank for SRL172 to produce the inactivated DAR-901 booster vaccine. In the present randomized, controlled dose-escalation trial we sought to evaluate the safety, tolerability and immunogenicity of a three-injection series of up to 1 mg DAR-901 to inform the decision to advance DAR-901 to efficacy trials. Study design and participants We recruited and screened healthy adult subjects aged 18-65 with a history of prior BCG immunization for the trial. We obtained written informed consent from all subjects. Subjects for the randomized double-blind, dose-escalation cohorts (A1, A2 and A3 in Table 1) were required to have a negative HIV ELISA (Vitros Anti-HIV 1 + 2 test, Ortho Clinical Diagnostics, Rochester, NY), a negative interferon gamma release assay (IGRA, T SPOT.TB, Oxford Immunotec), a normal physical examination, acceptable safety laboratory results, be negative for hepatitis B surface antigen and hepatitis C antibody, and be free of chronic illness. The target sample size for cohorts A1-3 was 48 (Fig 1). We recruited IGRA-positive and HIV-positive subjects for the open-label cohorts (A4, B1 and B2) (Table 1). The target sample size for cohorts A4, B1 and B2 was 14-22 (Fig 1). HIV-infected subjects were recruited from the HIV Care Program of the Dartmouth-Hitchcock Medical Center (DHMC). These subjects were required to have at least one previous positive HIV viral load and to have been on stable antiretroviral therapy. Study procedures including randomization and masking The first subject in cohorts A1, A2, and A3 received open-label DAR-901. When acceptable safety was confirmed 3 days after immunization, the remaining subjects in each cohort were randomized 3:1:1 to receive three injections of DAR-901 (7 x 10^6 CFU for 1 mg; 2 x 10^6 CFU for 0.3 mg; and 0.7 x 10^6 CFU for 0.1 mg), three of saline placebo, or two of saline followed by BCG (1-8 x 10^6 CFU). Computer-generated randomization was performed centrally and provided to the study pharmacist, who filled a tuberculin syringe to 0.1 mL with the agent specified. The pre-filled syringe was given to an injection nurse who administered the intradermal injection but was not involved in any subsequent evaluations. Separate study nurses and study physicians conducted all subject assessments to ensure that blinding to treatment allocation was maintained throughout the trial. A three-person expert Dose Review Committee approved escalation to the next dose level after review of 7-day safety data on all subjects in the previous cohort. We administered the first dose of vaccine or placebo within 28 days of screening; subsequent doses were administered at 2 and 4 months (Fig 2). All doses were administered at Dartmouth-Hitchcock Medical Center as intradermal injections over the deltoid muscle, alternating between the left and right arms. Safety assessments We repeated physical examination, vital signs and safety laboratory tests before each dose of vaccine and at 28 days and 6 months after dose 3 (End of Study, EOS, 180 days). Vital signs were repeated on all subjects 30-60 minutes after each dose. Subjects were seen 7 days after each dose of vaccine for physical examination, vital signs and examination of the injection site (Fig 2). Injection site reactions were measured in mm in the transverse diameter.
In addition, after each dose of vaccine, subjects were contacted at days 3, 5 and 7 after injection and twice weekly for three additional weeks to report daily self-measured temperature and self-measured vaccine site reactions, and were questioned regarding local and systemic adverse events (AEs). Safety laboratory studies included complete blood count, serum creatinine, glucose, liver function tests, CPK and urinalysis. HIV-positive subjects in cohorts B1 and B2 had HIV viral load determinations at the same intervals (COBAS TaqMan HIV-1 Test, Roche Molecular Diagnostics, Basel, Switzerland, detection limit = 20 HIV RNA copies/mL). We graded injection site reactions, abnormal laboratory values, and all other adverse events based on guidelines for vaccine trials from the United States Food and Drug Administration [17], which are appreciably stricter than the scales typically used in clinical trials of therapeutics. Immune assays We collected blood for immune assays from subjects in cohorts A1, A2, and A3 at baseline (pre-dose 1), pre-dose 2, pre-dose 3, and at 7, 28, 56 and 180 days after dose 3 (Fig 2). For subjects in cohorts A4, B1 and B2 we collected samples at baseline and 56 days post dose 3. We repeated IGRA assays on subjects in Cohort A3 at 2-6 months after dose 3. We isolated peripheral blood mononuclear cells (PBMC) by Ficoll-Hypaque density gradient separation. Cells were cryopreserved and, after thawing, cultured in 96-well tissue culture plates for 18-24 hrs in RPMI supplemented with 10% fetal bovine serum (Mediatech) with equal volumes of medium (negative control) or assay antigens. Triplicate samples were pooled and the concentration of IFN-γ assessed by ELISA (Affymetrix, Santa Clara, CA, USA). Antigens included phytohemagglutinin at 5 mcg/ml (positive control; Sigma), M. tuberculosis whole cell lysate (WCL) at 2 mcg/ml, or DAR-901 lysate at 1 mcg/ml. Plates for IFN-γ were read on a microplate reader (BioTek Synergy2, Winooski, VT, USA). Standard control curves used recombinant human IFN-γ protein. Anti-LAM antibody concentrations were assessed in singlet serum samples by a proprietary ELISA [14] and read on a microplate reader (BioTek Synergy2, Winooski, VT, USA). A separate report will detail results of multiparameter intracellular cytokine stimulation assays performed using separate aliquots of PBMCs. Outcomes and statistical analysis The objective of the trial was to determine a dose of DAR-901 for further clinical trials that had an acceptable safety profile and induced both humoral and cellular immune responses to mycobacterial antigens. The sample size for the present study was based on prior experience with Phase 1 vaccine studies and represents the number of participants needed to permit preliminary evaluation of safety and tolerability and support advancement of a vaccine development program. We performed the safety analysis on all enrolled subjects who received at least one injection of study treatment. Solicited and unsolicited AEs were classified by the Medical Dictionary for Regulatory Activities (MedDRA) preferred term and compared between treatment groups. Injection site reactions were deemed related to immunization. The Principal Investigator assessed other adverse events for their relationship to immunization. Safety laboratory studies with values outside pre-defined reference ranges were assessed for clinical significance.
For immune assays, we compared pre- and post-vaccination responses using Wilcoxon signed rank tests and, secondarily, vaccine vs placebo and vaccine vs BCG responses using the Mann-Whitney U test. GraphPad Prism software was used for the statistical analyses. For all analyses, we assigned a P-value <0.05 as the cutoff for statistical significance. The trial is registered with ClinicalTrials.gov as NCT02063555. The study was approved by the Dartmouth Committee for the Protection of Human Subjects. Role of the funding source The Dartmouth and Aeras study teams were involved in the study design, interpretation of data and writing the report. The corresponding author had access to all data and had final responsibility for data analysis and writing the study report. Participants and study treatments We screened 78 individuals to enroll 59 subjects. For IGRA-negative cohorts A1-A3 a total of 66 subjects were screened to obtain 49 eligible subjects who were randomized; one subject withdrew prior to immunization and was replaced, leaving 48 subjects in the A cohorts. The 18 subjects ineligible for cohorts A1-A3 included 9 who were IGRA-positive, 4 with abnormal laboratory results, 1 who was unable to return for follow-up, 1 without a BCG scar, 1 on systemic steroids, and 1 who was unable to tolerate phlebotomy. For IGRA-positive cohort A4 we screened a total of 5 subjects and all were eligible. For HIV-positive cohorts B1 and B2 we screened a total of 7 subjects to obtain 6 eligible subjects. One was ineligible due to a prior diagnosis of carcinoma (Fig 1). Characteristics of study subjects are shown in Table 2. In cohorts A1-A3 a total of 47 of 48 subjects received all three study injections. Study investigators withdrew one subject (DAR-901, 1 mg) after dose 2 due to hematuria that was subsequently attributed to a new diagnosis of schistosomiasis. All 48 subjects in cohorts A1-A3 completed follow-up through 6 months after immunization. In cohorts A4, B1 and B2 all 11 subjects received all three study injections. One subject withdrew 28 days after dose 3 citing inadequate time to complete study visits. The remaining 10 of 11 subjects completed follow-up through 6 months after immunization. The first subject was enrolled on April 28, 2014 and the last study visit was conducted on February 19, 2016. Injection site reactions Erythema and induration at the injection site were common in DAR-901 recipients. All reactions were mild and none met FDA criteria for Grade 1 or higher. Tables 3 and 4 summarize erythema and induration at the injection site by cohort. All local reactions healed spontaneously; three of 10 subjects at the 1 mg dose in A3 had visible erythema or scar at the dose 3 site at the end-of-study (EOS) visit (range 5-6 mm). For 6 HIV-positive subjects in cohorts B1 and B2, median erythema at Day 7 was 12 mm (range 7-12) and median induration 8 mm (range 7-12). No subjects had pustules, crusts or desquamation at the injection site. For 9 HIV-negative subjects who received BCG for dose 3, median injection site reactions at 7 days were 30 mm for erythema and 38 mm for induration. By the 28-day visit most subjects had skin breakdown with drainage or ulceration (data not shown). Solicited symptoms at 7 days for 53 HIV-negative subjects are shown in Table 5. Among the 6 HIV-positive subjects, one reported tenderness and one pruritus at the vaccine site. Adverse events and laboratory results There were no serious adverse events (AEs). AEs excluding injection site reactions are shown in Table 6.
There was no difference in the distribution of AEs between DAR-901 recipients and the saline placebo or BCG cohorts. Within the DAR-901 cohorts there were no patterns of reaction within organ systems to suggest an effect of the vaccine. One HIV-positive subject with treated hypertension had a Grade 3 increase in blood pressure one hour after phlebotomy and the dose 2 injection (from 140/90 to 160/100) on a day when she reported she had failed to take her anti-hypertensive medications. No other Grade 3 changes in vital signs were noted. Mild CPK elevations were common at baseline in physically active subjects (data not shown). Treatment-emergent CPK elevations Grade 1 or higher were noted in 25 of 59 (42%) subjects during the study, including in 14 of 41 (34%) DAR-901 recipients, 6 of 9 (67%) BCG recipients and 4 of 9 (44%) placebo recipients. Grade 3 CPK elevations (3.1 to 10x ULN) were noted in 6 subjects; one Grade 4 reaction occurred in a placebo recipient with a clinical diagnosis of influenza. Treatment was not required for any CPK abnormality. There was one Grade 3 episode of hyperglycemia (random, >200 mg/dL) 2 months after dose 2 in a placebo recipient with a medical history of mild glucose intolerance; dietary treatment was continued. One HIV-positive subject who received 1.0 mg DAR-901 and had a prior history of fluctuating hemoglobin levels had a Grade 3 hemoglobin decrease pre-dose 3, which resolved spontaneously to pre-study levels 28 days later. No other Grade 3 laboratory abnormalities were noted. HIV viral loads by PCR were <20 copies/mL in all 6 HIV-positive subjects at screening. Subjects had 4 additional viral loads determined during the study (pre-dose 1, pre-dose 2, pre-dose 3 and 28 days after dose 3). The value for one subject was 70 copies/mL on Day 1 prior to dose 1, but <20 copies/mL on 3 subsequent determinations. One subject (772) had a viral load of 430 copies/mL pre-dose 3, which was suspected by her primary caregiver to be related to a lapse in taking her anti-retroviral therapy. After counseling, her repeat viral load 28 days after dose 3 was <20 copies/mL. All other viral loads were <20 copies/mL for all subjects at all additional time points. Collectively these data confirm that a three-dose series of 1 mg DAR-901 does not have an adverse effect on HIV viral load. There were no statistically significant differences at any study visit in absolute or ordinal IFN-γ responses to DAR-901 lysate or anti-LAM responses between vaccine and placebo subjects (n = 3 per cohort), nor between HIV-negative and HIV-positive study subjects or between IGRA-negative and IGRA-positive subjects. There were no differences in IFN-γ responses to DAR-901 lysate between the 10 subjects in the 1 mg A3 cohort and the 9 subjects in the BCG cohort. Median IFN-γ responses to MTB lysate were significantly greater among the 9 BCG subjects compared to the 10 subjects in the 1 mg DAR-901 A3 cohort pre-dose 1 (284 vs. 40 pg/ml, p = 0.028), pre-dose 2 (304 vs 19 pg/ml, p = 0.003), and post-dose 3 at day 28 (2,624 vs. 125 pg/ml, p = 0.005), day 56 (922 vs. 79 pg/ml, p = 0.002) and the 6-month EOS (862 vs. 89 pg/ml, p = 0.004). See also S2 Fig. There were no differences in the levels of anti-LAM antibody response at any visit between the 9 BCG subjects and the 10 subjects in the 1 mg A3 DAR-901 cohort. Repeat IGRA assays 2-6 months after dose 3 on 10 baseline IGRA-negative subjects who received 1 mg DAR-901 were all negative.
The subject with schistosomiasis received two doses of DAR-901: dose 2 was received on January 2, 2015; the initial repeat IGRA on July 10, 2015 was positive at 0.70; a repeat on August 14, 2015 was negative at 0.00. Discussion Development of DAR-901 has been based on clinical and epidemiologic studies in humans. Animal models remain an imperfect simulation of the complex natural history of M. tuberculosis infection and vaccine-induced protection in humans. Nor is there a validated in vitro correlate of tuberculosis vaccine efficacy to justify the use of immune assays in selecting optimal vaccine constructs. Epidemiologic skin test studies demonstrate that prior infection with either M. tuberculosis or non-tuberculous mycobacteria confers protection against disease from subsequent tuberculosis exposure [6][7][8]. Prior vaccine trials indicate that live BCG, live M. microti, inactivated M. bovis and inactivated non-tuberculous mycobacterial reagents all confer protection against tuberculosis in humans [9][10][11]. Although variations in the efficacy of BCG have been observed, efficacy has been high in all studies where mycobacteria-naïve infants were immunized at birth [18]. In all of these studies immune protection against tuberculosis has only been observed in response to polyantigenic, whole-organism exposure or infection. Of note, such protection is not species-specific within the genus. A whole cell vaccine derived from a non-tuberculous mycobacterium was therefore selected to simulate the whole organism exposures known to confer immune protection against tuberculosis in humans. We have shown previously that immunization with agar-produced SRL172, an inactivated whole cell non-tuberculous mycobacterial vaccine, has an acceptable safety profile and is well tolerated. In a Phase 3 trial in HIV-infected patients in Tanzania, SRL172 was both immunogenic and effective in preventing culture-confirmed tuberculosis [13]. That trial was designed with 5 doses before we had immune response data from a Phase 2 trial in Finland which demonstrated significant immune responses after 3 doses [15]. DAR-901 is the scalable, broth-produced formulation made from the SRL172 Master Cell Bank. In the present Phase 1 study we demonstrate that 3 doses of DAR-901 have an acceptable safety profile, are well tolerated as a BCG booster, and produce injection site reactions comparable to those observed with SRL172. In addition, DAR-901 induces both cellular and humoral responses to polyantigenic mycobacterial lysates, as observed previously with agar-manufactured SRL172 [14]. This is the only inactivated whole cell investigational vaccine against tuberculosis to have shown efficacy in humans. The subjects in this Phase 1 trial represent a spectrum of subjects who will be candidates for DAR-901 boosting in tuberculosis-endemic countries, including HIV-negative, HIV-positive, IGRA-negative and IGRA-positive persons. Extensive safety monitoring, including day-7 visits after each dose, twice weekly phone contact for 28 days after each dose, and a final visit 6 months after the last dose, indicates that DAR-901 has an acceptable safety profile and is well-tolerated. Other than injection site reactions, the only adverse event judged possibly related to DAR-901 was mild headache, observed in 3 of 21 subjects who received the 1 mg dose. There were no clinically significant treatment-emergent vital signs or laboratory abnormalities related to DAR-901.
Measured reactions at the injection site were typically limited to mild erythema and induration and were not associated with significant discomfort. Superficial desquamation or erosion was noted in a minority of subjects either at 7 days or later over the course of the study. These reactions were comparable to those observed with a 1 mg dose of SRL172 in the Phase 3 study [13]. In contrast, BCG was associated with drainage or skin breakdown in most subjects 1-2 months after immunization. The immune assays in the present study showed that 1 mg DAR-901 induced IFN-γ responses to the vaccine lysate and to M. tuberculosis lysate compared to baseline. Mean OD values for antibody to LAM increased significantly prior to dose 3, comparable to the responses seen after 5 doses of SRL172 [14]. IFN-γ responses to M. tuberculosis lysate and antibody responses to LAM remained statistically significant 6 months after immunization. In this Phase 1 study with a small sample size we did not demonstrate statistically significant differences in immune response between the 3 placebo recipients and 10 vaccine recipients in any of the three blinded dose cohorts. In order to avoid type II errors we did not adjust our threshold for statistical significance to correct for multiple comparisons in our analyses of immune responses. IFN-γ responses to MTB lysate among BCG recipients in this trial were substantially higher than those observed after three doses of DAR-901. However, SRL172 has been shown effective as a booster, while two large randomized trials have shown that BCG boosters are not effective in the prevention of tuberculosis [19,20]. Indeed, in the Malawi booster trial there was a trend toward a higher rate of tuberculosis among BCG recipients than control recipients [20]. This raises the interesting question of whether BCG recipients might have had excessive IFN-γ responses, which have been shown to be detrimental to immune control of tuberculosis in experimental studies [21]. The target product profile for DAR-901 is a booster vaccine for BCG-primed adolescents and adults living in tuberculosis-endemic countries. SRL172 is the only new tuberculosis vaccine shown effective in humans, but the agar-based production method was not suitable for mass distribution. DAR-901 is produced from the same seed strain as SRL172, but manufactured by a scalable, broth-grown procedure. In the present trial we have shown that DAR-901 has an acceptable safety profile, is well-tolerated in a spectrum of BCG-primed adults representing groups who will be candidates for boosting in tuberculosis-endemic countries, and induces significant IFN-γ and antibody responses to polyantigenic mycobacterial antigens. Importantly, DAR-901 did not result in IGRA conversion, indicating that it can be studied in a prevention of infection trial. DAR-901 is now entering a fully powered, Phase 2b randomized, controlled, prevention of infection trial among adolescents in Tanzania. S2 Fig: Interferon gamma (IFN-γ) responses to Mycobacterium tuberculosis (MTB) whole cell lysate among 10 subjects who received three injections of 1 mg DAR-901 (Cohort A3) compared to 9 subjects who received BCG 1-8 x 10^6 organisms in 0.1 mL. Samples for Visit 1, pre-dose 2, and pre-dose 3 were collected 2 months apart and were obtained prior to doses 1, 2 and 3, respectively. Bacille Calmette-Guerin (BCG) recipients exhibited greater IFN-γ responses to MTB lysate at multiple timepoints after dose 3.
2018-04-03T03:46:03.094Z
2017-05-12T00:00:00.000
{ "year": 2017, "sha1": "a4721514407feee4a7f8ef7450d5253a17e2f37a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0175215&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b6c4b383cc5cfd21f02f5d5fb32b416d8918462", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
229721245
pes2o/s2orc
v3-fos-license
MYC DNA Methylation in Prostate Tumor Tissue is Associated with Gleason Score Increasing evidence suggests a role of epigenetic mechanisms at chromosome 8q24, an important cancer genetic susceptibility region, in prostate cancer. We investigated whether MYC DNA methylation at 8q24 (six CpG sites from exon 3 to the 3′ UTR) in prostate tumor was associated with tumor aggressiveness (based on Gleason score, GS), and we incorporated RNA expression data to investigate the function. We accessed radical prostatectomy tissue for 50 Caucasian and 50 African American prostate cancer patients at the University of Maryland Medical Center, selecting an equal number of GS 6 and GS 7 cases per group. MYC DNA methylation was lower in tumor than paired normal prostate tissue for all six CpG sites (median difference: −14.74 to −0.20 percentage points), and we observed similar results for two nearby sites in The Cancer Genome Atlas (p < 0.0001). We observed significantly lower methylation for more aggressive (GS 7) than less aggressive (GS 6) tumors for three exon 3 sites (for CpG 212 (chr8:128753145), GS 6 median = 89.7%; GS 7 median = 85.8%; p-value = 9.4 × 10^−4). MYC DNA methylation was not associated with MYC expression, but was inversely associated with PRNCR1 expression after multiple comparison adjustment (q-value = 0.04). Findings suggest that prostate tumor MYC exon 3 hypomethylation is associated with increased aggressiveness. Introduction Chromosome 8q24 has been established as an important region in the genetic susceptibility to prostate cancer [1][2][3][4][5][6][7][8][9][10][11], particularly among men of African ancestry, where additional susceptibility single nucleotide polymorphisms (SNPs) have been discovered that are monomorphic in other populations [10] and explain a greater proportion of familial risk [12]. The underlying mechanisms of susceptibility due to this region are still poorly understood. The 8q24 locus has traditionally been described as a gene desert, with the nearest prostate cancer susceptibility SNP located about 200 kb upstream of the oncogene MYC. However, this terminology is somewhat misleading, since there are several other genes and non-coding RNAs (ncRNAs) located at 8q24. MYC encodes a transcription factor that regulates genes involved in cell growth, differentiation and apoptosis. MYC is commonly overexpressed in prostate tumor tissue and has long been thought to play a role in prostate cancer, particularly with regard to prostate cancer progression [13]. Several other genes and ncRNAs at 8q24, such as POU5F1B (previously thought to be a pseudogene), PRNCR1, CASC11 and CCAT2, have also been shown to be overexpressed in prostate cancer [14][15][16][17] and may also play a role in the development or progression of prostate cancer. There is growing evidence for a role of epigenetic mechanisms at chromosome 8q24 in prostate cancer based on the identification of gene regulatory elements at this locus [18], as well as long-range tissue-specific interactions between 8q24 cancer susceptibility loci and MYC [19,20]. Expanding on these findings, Du and colleagues examined physical interactions across the genome for several 8q24 cancer susceptibility regions using a chromosome conformation capture (3C)-based, multi-target sequencing technology in cell lines for a variety of cancers (including prostate) and found frequent interactions with MYC as well as other intra- and inter-chromosomal targets; other common intra-chromosomal targets included PVT1, FAM84B and GSDMC [21].
The authors also observed an enrichment of interactions with genes in important cancer pathways such as Wnt signaling. Another study investigating genome-wide physical interactions using circularized chromosome conformation capture (4C) coupled with next-generation sequencing for an 8q24 enhancer region also identified interactions with MYC, FAM84B and GSDMC, among other genes [22]. These findings suggest that the 8q24 locus may function as a regulatory hub for a variety of gene targets in important cancer pathways [21]. A major component of the epigenetic code is DNA methylation, which involves the addition of a methyl group (typically to a cytosine base located 5′ to a guanine, i.e., a CpG site) and is thought to influence carcinogenesis by affecting gene expression or genetic stability [23]. Previous studies have demonstrated the importance of DNA methylation alterations in prostate cancer, both early in prostate cancer development and also with regard to prostate cancer progression [24][25][26][27]. In previous work from our group, we reported an increased risk of aggressive prostate cancer associated with higher peripheral blood DNA methylation in MYC exon 3 in a large prospective study using pre-diagnostic blood samples, which persisted after adjustment for established 8q24 prostate cancer susceptibility loci [28]. In the present study, our primary aim was to investigate whether MYC exon 3 DNA methylation in prostate tumor tissue was associated with tumor aggressiveness based on Gleason score (GS), an intermediate prognostic marker for prostate cancer [29]. Exon 3 in MYC is highly conserved across species and, interestingly, hypomethylation at MYC exon 3 was observed in human myeloma cell lines compared to normal lymphocytes [30] and correlated with progression from normal tissue to metastatic disease in colorectal tissue [31]. As a secondary aim, we sought to explore the potential downstream consequences of prostate tumor MYC DNA methylation by evaluating associations with RNA expression for MYC and other nearby genes and ncRNAs in prostate tumor tissue. As another secondary aim, we investigated whether the association between MYC DNA methylation and GS varied by race (African American vs. Caucasian), given striking disparities in prostate cancer outcomes by race [32] and growing evidence for a role of biological differences in tumors [33][34][35]. Our findings indicated lower MYC DNA methylation in prostate tumor compared to paired normal prostate tissue samples for all CpG sites evaluated, suggesting that these may represent somatic alterations in prostate tissue. In addition, we observed significantly lower MYC DNA methylation at three exon 3 CpG sites in more aggressive (GS 7) compared to less aggressive (GS 6) tumors, and we observed a similar pattern for another exon 3 CpG site that approached statistical significance. We observed some evidence of a stronger association between MYC DNA methylation and GS for African American than Caucasian men for one of the CpG sites that showed a significant overall association with GS. Incorporation of RNA expression data indicated an inverse association between prostate tumor MYC DNA methylation and PRNCR1 expression in tumor tissue. Additional research is needed to replicate these findings and further investigate the potential role of MYC DNA methylation in aggressive prostate cancer using experimental laboratory designs.
Study Samples

We accessed archival formalin-fixed, paraffin-embedded (FFPE) tumor and normal prostate tissue samples from 50 Caucasian and 50 African American prostate cancer patients who underwent radical prostatectomy at the University of Maryland Medical Center. For each race group, we selected 25 men with GS 7 and 25 men with GS 6 disease to allow for separate evaluation of the association between MYC DNA methylation and GS by race; the study workflow is presented in Figure 1. We did not include men with GS 8 or higher due to relatively small numbers available for analysis. We systematically selected the most recent samples available. This study was approved by the IRB at the University of Maryland, Baltimore (HP-00074942, most recent approval date: 7 July 2020). We obtained a waiver of informed consent as part of the approved IRB protocol. The data presented in this paper will be made freely available upon request.

We identified the most representative tumor block and the most representative normal block from the available FFPE radical prostatectomy tissue for each patient. Based on an H&E slide, we circled ~2 mm tumor regions (≥75% purity) and ~2 mm normal regions for nucleic acid extraction, then cut six unstained double-thickness slides (10 µm) per sample, with four used for the DNA methylation assays and two used for the RNA expression assays from the same circled regions. For patients with a GS of 7, we preferentially targeted regions with a Gleason pattern 4 to provide greater contrast with our comparison group (patients with GS of 6, i.e., who typically have Gleason pattern 3 only). We cut additional slides from the tumor and normal samples to serve as replicate samples for quality control for the DNA methylation assays (n = 3). We also cut additional tumor and normal slides to serve as quality control replicates for the RNA expression assays (n = 5).

Nucleic Acid Extraction/Preparation and RNA Quality Assessment

In preparation for the DNA methylation assays, the University of Maryland FFPE samples were sent to EpigenDx, Inc. (Hopkinton, MA, USA), where direct bisulfite treatment was performed on digested FFPE tissue. Specifically, FFPE samples were deparaffinized with Histochoice clearing reagent and dehydrated with 100% ethanol. Samples were then digested with Proteinase K in a 1X digestion buffer, and 20 µL were directly bisulfite treated using the Zymo Research EZ DNA Methylation Direct Bisulfite Kit. For the RNA expression assays, RNA was extracted at the University of Maryland, Baltimore using Qiagen's RNeasy FFPE kit. Quantitative analysis was performed using both a ThermoFisher NanoDrop and an Agilent Technologies Bioanalyzer. In addition, we evaluated the quality of the RNA using an RNA quality assessment kit from ThermoFisher. Briefly, this approach involved calculating the difference in quantification cycles (Cq), i.e., delta Cq, between each study sample and high-quality RNA controls (HeLa cells); lower delta Cq values correlate with higher quality RNA (ThermoFisher, Carlsbad, CA, USA).

MYC DNA Methylation Assays

We conducted targeted pyrosequencing assays on bisulfite-treated DNA from the University of Maryland samples to measure DNA methylation at six CpG sites spanning from MYC exon 3 to the 3′ UTR (GRCh37/hg19 coordinates: chr8:128753145-chr8:128753221; the sites were labeled as CpG 212-CpG 217 for ease of referencing) (EpigenDx, Inc., Hopkinton, MA, USA).
Specifically, five of the CpG sites (212-216) were located in exon 3, and the remaining CpG site (217) was located in the 3′ UTR for MYC. The bisulfite conversion step and pyrosequencing assays have been described previously [28,36]. The target sequences for the specific assays run in this study (ADS3573-FS and ADS3573-FS2) are given in Table A1 in Appendix A. There were 11 patients for whom the assays failed in the present study, and thus our analyses using the DNA methylation data were restricted to the remaining 43 African American and 46 Caucasian men (n = 89). We computed coefficients of variation (CVs) for each CpG site based on the three participants with replicate QC samples (i.e., three pairs of tumor samples and three pairs of normal samples). The average CV across all six CpG sites was 2.7% for the normal tissue pairs and 3.4% for the tumor tissue pairs.

RNA Expression Data

We obtained RNA expression data from the University of Maryland samples using the Human Clariom D™ array (ThermoFisher, Carlsbad, CA, USA), which covers 138,745 transcript cluster IDs (TCs). We used the UCSC Genome Browser to identify genes and ncRNAs within 1 Mb either upstream or downstream of MYC based on GRCh37/hg19 coordinates for MYC (i.e., chr8:128748315-128753680). We then used the NetAffx™ tool [37] to identify the TCs that corresponded to MYC and the other genes/ncRNAs of interest in our array data. Of the 33 genes/ncRNAs in the region of interest, nine were not covered on the Clariom D array (SRMP1P1, AC020688.1, JX003871, BC106081, DQ515899, AC108714.1, BC042052, HV975509 and AC103705.1). Based on the available genes/ncRNAs, we identified a total of 22 TCs for analysis. We performed SST-RMA normalization and log2 transformation of the expression data using TAC v.4.0 software (ThermoFisher, Carlsbad, CA, USA). Analyses of the RNA expression data by tumor/normal status were conducted in 100 tumor-normal pairs. Analyses of RNA expression by GS were done using 100 tumor samples. Analyses of prostate tumor MYC DNA methylation in relation to RNA expression in prostate tumor tissue were limited to the 89 tumor samples with DNA methylation data. We computed CVs for each TC of interest based on the five participants with replicate QC samples (i.e., five pairs of tumor samples and five pairs of normal samples). The average CV across the 22 TCs was 6.8% for the normal tissue pairs and 8.3% for the tumor tissue pairs.

The Cancer Genome Atlas (TCGA) DNA Methylation Data

We incorporated DNA methylation array data from the TCGA-PRAD cohort (498 primary prostate tumor samples) to evaluate two additional MYC exon 3 CpG sites near (<0.2 kb from) the CpG sites evaluated in the University of Maryland samples in relation to tumor/normal status and GS among the tumor samples. We downloaded the TCGA-PRAD Illumina Infinium HumanMethylation450 (HM450) BeadArray level 3 data from https://gdac.broadinstitute.org (data version: 2016_10_28) for all 498 PRAD primary tumor and 50 normal samples. We first extracted the β values of these two CpG probes (cg00163372 and cg08526705) for the 50 tumor-normal paired samples (47 Caucasian and three African American men). In addition, we also extracted the data for all tumor samples with GS 6 (n = 45) and GS 7 (n = 248) for DNA methylation comparisons by GS. We computed the median β values and the interquartile range (IQR) for the tumor and normal tissue samples, and separately by GS for the tumor samples.
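As an aside for readers who want to reproduce this kind of summary, a minimal sketch in R (one of the two analysis environments named under Statistical Analysis below) is given here. This is not the authors' code: the function and variable names are hypothetical, and it assumes the beta values for one probe arrive as a numeric vector alongside a grouping factor (tissue type or GS group).

# Minimal sketch: summarize HM450 beta values for one MYC probe
# (e.g., cg00163372) as the median % methylation with the IQR, by group.
# `beta`: numeric vector of beta values in [0, 1]; `group`: factor such as
# tissue type (tumor/normal) or GS group (6/7). Names are hypothetical.
summarize_probe <- function(beta, group) {
  pct <- 100 * beta  # convert beta values to % methylation for comparability
  tapply(pct, group, function(x) {
    q <- quantile(x, c(0.25, 0.5, 0.75), na.rm = TRUE)
    sprintf("median %.2f%% (IQR %.2f-%.2f)", q[2], q[1], q[3])
  })
}

# Hypothetical usage for the 50 tumor-normal pairs:
# summarize_probe(beta_cg00163372, tissue_type)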
The β values (ranging from 0 to 1), widely used to measure the methylation level, were converted to percentages to be comparable to the results for the University of Maryland samples.

Statistical Analysis

Analyses were conducted in SAS Studio 3.8 and R Studio (version 3.6.1). We evaluated Spearman correlations in % DNA methylation for MYC CpG-CpG pairings, separately for tumor and normal samples. We used Wilcoxon signed-rank tests to assess differential DNA methylation and RNA expression between paired tumor and normal prostate tissue samples from the same individual. We used Wilcoxon rank sum tests to compare DNA methylation and RNA expression between the GS 6 and GS 7 groups using tumor tissue. We used logistic regression models to compute odds ratios (OR) and 95% confidence intervals (95% CI) for the association between decreasing CpG site DNA methylation in prostate tumor tissue (modeled continuously) and GS (7 vs. 6), overall and separately by race (African American and Caucasian). Additional adjustment for age at surgery, year of surgery or imputed immune cell distribution in the tumor tissue from CIBERSORTx software [38] did not appreciably alter the results, and thus these variables were not adjusted in the final models. We evaluated the interaction between each CpG site and race by including CpG site × race as a cross-product term in the model and testing for significance. We also used a similar approach to assess the interaction between each TC and race with regard to GS. We considered an interaction p-value < 0.20 as noteworthy (even if not statistically significant). For all analyses, we defined statistical significance as a p-value < 0.05.

We performed principal components analysis to assess potential clustering patterns in the RNA expression data by tissue type (tumor/normal), array run, RNA extraction batch, Positive vs. Negative Area Under the Curve (AUC), which is a QC measure computed by the TAC software, RNA quality (based on delta Cq) and covariates including age at surgery and year of surgery. We did not observe much evidence of clustering for most of the factors examined except for array run, Positive vs. Negative AUC and RNA quality (data not presented). To assess the association between MYC DNA methylation and RNA expression in tumor tissue, we ran separate linear regression models for each of the 22 TCs of interest (dependent variable) in relation to each CpG site (independent variable). We computed q-values reflecting the false discovery rate after multiple comparison adjustment (22 models for each CpG site) using the Benjamini and Hochberg method [39]. Given the clustering in the RNA expression data noted above, we assessed the impact of additional adjustment for array run, Positive vs. Negative AUC and RNA quality (as well as age at surgery and year of surgery) in our models; however, these variables did not appreciably alter the results, and thus were not adjusted in the final models. As a sensitivity analysis, we also ran additional linear regression models restricted to the samples of highest quality (Positive vs. Negative AUC > 0.7) to ensure that the results were not unduly influenced by sample quality.

Study Population Characteristics

Among the 89 prostate cancer patients from the University of Maryland Medical Center who had available MYC DNA methylation data for analysis, age at radical prostatectomy ranged from 42 to 75 years, with a median age of 58 years; the distribution of age at diagnosis was similar to the age at surgery (Table 1).
Year of surgery ranged from 2001 to 2017. Our study sample was about evenly divided between Caucasian and African American men (i.e., 52% and 48%, respectively) and between men with GS 6 and those with GS 7 tumors (45% and 55%, respectively) based on our study design. Of the 49 men with GS 7 tumors, a greater proportion had a Gleason pattern of 3 + 4 than 4 + 3 (Table 1). Most men (67%) had a pathologic tumor stage (pT stage) of 2, 25% had a pT stage of 3 and 8% had missing pT stage information. Of the patients with known nodal involvement, 95% were classified as N0, as expected for this radical prostatectomy population. The median preoperative prostate-specific antigen (PSA) concentration was 6.2 ng/mL (interquartile range (IQR): 4.8, 8.0). The men with GS 6 tumors were generally similar to those with GS 7 tumors with respect to demographic characteristics, except that the GS 6 patients tended to have radical prostatectomy in earlier years (Table 1).

MYC DNA Methylation CpG-CpG Correlations and Prostate Tumor-Normal Differences

DNA methylation levels at the six MYC CpG sites evaluated in the University of Maryland samples were moderately correlated in tumor tissue (Spearman rho: 0.27 to 0.66, p-value < 0.01 for all CpG site pairings). Correlations were less pronounced in the normal tissue samples (Spearman rho: −0.08 to 0.57), with fewer results achieving statistical significance at the 0.05 level. In the normal prostate tissue samples from the University of Maryland, % DNA methylation tended to be relatively high for the six CpG sites evaluated, with the exception of CpG 217 (chr8:128753221), which was the one CpG site evaluated in the 3′ UTR of MYC and which displayed a median of 66.66% (Table 2). The median % DNA methylation in the normal samples for the other five CpG sites (located in MYC exon 3) ranged from 83.39% to 89.72%.

When we compared paired tumor and normal prostate tissue samples from the same individual, we observed significantly lower MYC DNA methylation for five of the six CpG sites in the tumor than the normal samples (p-value < 0.05). The sixth CpG site (CpG 212 at chr8:128753145) displayed a similar pattern, with a borderline significant p-value (p = 0.06). The median tumor-normal difference in % DNA methylation for the six CpG sites ranged from −14.74 to −0.20 percentage points (Table 2). Similar results were observed for the two nearby CpG sites evaluated in TCGA, with both sites showing significantly lower methylation in tumor compared to normal samples (median differences of −6.89 and −5.46 percentage points, respectively; Table 2). Median tumor-normal differences in the University of Maryland samples tended to be similar between Caucasian and African American men. The number of African American men with paired tumor and normal tissue in the TCGA dataset was too small (n = 3) for meaningful comparison by race.

MYC DNA Methylation Differences by GS in Prostate Tumor Tissue

Among the University of Maryland tumor samples, we observed significantly lower MYC DNA methylation for more aggressive tumors (GS 7) than less aggressive tumors (GS 6) for three exon 3 CpG sites (for CpG 212 (chr8:128753145), median % DNA methylation for GS 6 group = 89.7%; median for GS 7 group = 85.8%; p-value = 9.4 × 10−4 (Figure 2)). We also observed a similar result for another exon 3 CpG site, CpG 213 (chr8:128753151), which approached statistical significance (p = 0.072) (Figure 2).
For these four CpG sites, the median difference between the GS groups ranged from 3.7 to 4.1 percentage points. We observed no significant differences by GS for the two CpG sites evaluated in the TCGA dataset (p > 0.05 for both) (Figure A1 in Appendix A). Among the CpG sites that displayed a significant overall association with GS, we observed some evidence of a stronger association between lower DNA methylation at CpG 212 (chr8:128753145) and higher GS for African American (OR = 1.23, 95% CI: 1.04-1.45) than Caucasian men (OR = 1.07, 95% CI: 1.00-1.15; p-interaction = 0.15; Table 3).

RNA Expression Tumor-Normal Differences and Differences by GS in Prostate Tumor Tissue

We evaluated differences in the log2-transformed RNA expression levels by tumor/normal sample status for 22 TCs for MYC and other genes/ncRNAs within 1 Mb of MYC using paired tumor-normal prostate tissue samples from the 100 University of Maryland patients (Table 4). Five genes or ncRNAs were significantly upregulated in tumor compared to normal prostate tissue samples, including MYC, PRNCR1, RP11-382A18.3, POU5F1B and RNU4-25P, whereas one long non-coding RNA (lncRNA), RP11-255B23.1, was significantly downregulated in tumor compared to normal prostate tissue (p < 0.05). Median tumor-normal differences in expression tended to be similar between Caucasian and African American men (data not presented). Among the 100 tumor tissue samples, only two of the genes/ncRNAs evaluated displayed differential expression between GS 6 and GS 7 samples, which included CCAT1 and RNU4-25P (Table 4). For CCAT1, the median log2-transformed expression level (IQR) was 3.5 (3.3, 3.9) for the GS 6 group and 3.2 (3.0, 3.6) for the GS 7 group (p-value = 1.5 × 10−3); for RNU4-25P, the median log2-transformed level (IQR) was 5.8 (5.5, 6.2) for the GS 6 group and 5.3 (4.9, 5.6) for the GS 7 group (p-value = 3.6 × 10−5; Table 4). Notably, RNU4-25P also showed a significant interaction with race with respect to GS (p-interaction = 5.6 × 10−3), such that there was a significant association between tumor RNU4-25P expression and GS among Caucasian men, but not African American men (data not presented).

Prostate Tumor MYC DNA Methylation and RNA Expression in Prostate Tumor Tissue

When we evaluated prostate tumor MYC DNA methylation in relation to RNA expression in prostate tumor tissue in the University of Maryland samples, we did not observe a significant association between any of the six CpG sites and MYC expression (data not shown). Our top findings (p ≤ 0.05) suggested inverse associations between MYC DNA methylation and expression of two lncRNAs at chromosome 8q24, PRNCR1 and CASC11, and a positive association between MYC DNA methylation and miR-1206 expression (Table 5). Notably, the association between DNA methylation at CpG 216 (chr8:128753200) and PRNCR1 expression remained statistically significant after adjustment for multiple comparisons (β = −0.022, s.e. = 0.007, p-value = 2.0 × 10−3, q-value = 0.04; Table 5). The effect size and direction of the associations remained similar when we restricted our analysis to the 56 prostate tumor samples with the highest quality (Positive vs. Negative AUC > 0.7) (data not shown). The remaining genes and ncRNAs evaluated were not significantly associated with MYC DNA methylation (p > 0.05).
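For concreteness, the modelling strategy described under Statistical Analysis can be sketched in R as follows. This is a hypothetical reconstruction rather than the authors' code: object and column names are invented, and covariate adjustment is omitted because, as noted above, it did not appreciably alter the results.

# Minimal sketch: one linear model per transcript cluster (TC), regressing
# log2 expression on % methylation at a single CpG site, with
# Benjamini-Hochberg q-values computed across the 22 models for that site.
# `expr_mat`: samples x 22 TCs matrix of log2 expression (hypothetical name);
# `meth`: % DNA methylation at one CpG site for the same tumor samples.
expression_models <- function(expr_mat, meth) {
  out <- t(sapply(colnames(expr_mat), function(tc) {
    coefs <- summary(lm(expr_mat[, tc] ~ meth))$coefficients
    c(beta = coefs["meth", "Estimate"],
      se   = coefs["meth", "Std. Error"],
      p    = coefs["meth", "Pr(>|t|)"])
  }))
  out <- as.data.frame(out)
  out$q <- p.adjust(out$p, method = "BH")  # false discovery rate over the 22 tests
  out[order(out$p), ]
}

# The GS models were analogous logistic regressions; the race interaction
# test adds a cross-product term, e.g. (with hypothetical column names):
# glm(I(gs == 7) ~ meth * race, family = binomial, data = tumor_df)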
Discussion

Based on the importance of the 8q region and the MYC gene in prostate carcinogenesis/progression [13,40], we investigated the role of MYC DNA methylation (six CpG sites spanning from exon 3 to the 3′ UTR) in prostate tumor tissue from African American and Caucasian prostate cancer patients who underwent radical prostatectomy at the University of Maryland Medical Center. Comparing MYC DNA methylation between paired tumor-normal prostate tissue samples from the same men, we found lower DNA methylation in the tumor compared to the paired normal samples for all six CpG sites evaluated, suggesting that these may represent somatic alterations in prostate tissue. Similar patterns were observed for the two nearby CpG sites in exon 3 that were evaluated in TCGA. Interestingly, we also observed lower MYC DNA methylation for more aggressive (GS 7) compared to less aggressive (GS 6) tumors for several exon 3 CpG sites. When we incorporated RNA expression data in prostate tumor tissue, we observed significant inverse associations between tumor MYC DNA methylation and expression of the lncRNAs PRNCR1 and CASC11, and a significant positive association with miR-1206 expression. Notably, our finding of an association between CpG 216 (chr8:128753200), one of the CpG sites that displayed a significant association with GS, and PRNCR1 expression also remained significant after adjustment for multiple comparisons. We did not observe evidence of an association between MYC DNA methylation at any of the CpG sites evaluated and MYC expression.

A number of epigenome-wide studies have compared DNA methylation at specific CpG sites across the genome between prostate tumor and normal tissue [41-51], with a smaller number of epigenome-wide studies comparing DNA methylation in prostate tissue with respect to GS [52-54]. In these studies, MYC was not among the top hits, but few, if any, evaluated the specific CpG sites that we evaluated in the present study. For example, most of the previous studies, including TCGA, used array-based technologies such as the Illumina HumanMethylation450 BeadChip that did not cover these specific CpG sites. Another group that used MethylPlex technology with massively parallel sequencing evaluated MYC in prostate tumor tissue, but focused on the promoter region and did not evaluate exon 3 [42]. Thus, to our knowledge, our study is one of the first to study these specific MYC CpG sites in prostate tissue.

Exon 3 in MYC is highly conserved across species, and we previously reported a variety of characteristics that are common to transcriptional regulatory regions within 2 kb of the MYC exon 3 CpG sites in various prostate cancer cell lines using ENCODE data, which included DNaseI HS peaks, histone methylation and acetylation marks and TFBS for ETV1 and TCF7L2 [28]. Another study using ENCODE data also provided evidence of CHD2 and CTCF transcription factor ChIP-seq clusters near MYC [55]. CTCF is a multifunctional transcription factor involved in gene expression and gene regulation, and CTCF binding is highly sensitive to DNA methylation [55]. A previous study found lower MYC exon 3 DNA methylation in human myeloma cell lines when compared with normal lymphocytes, which correlated with increased expression [30]. Moreover, hypomethylation at MYC exon 3 in colorectal tissue was shown to be associated with progression from normal tissue to metastatic disease [31].
Although these latter studies did not focus on prostate cancer, they implicate a potential role of MYC exon 3 hypomethylation in the development or progression of cancer, which is consistent with the directions of our findings with regard to prostate tumor-normal tissue differences and differences by GS in prostate tumor tissue.

Although the results should be interpreted cautiously due to relatively small numbers, it is interesting that we observed some evidence of a stronger association for MYC DNA methylation at CpG 212 (chr8:128753145) and GS for African American than Caucasian men. Specifically, while the direction of the association was consistent for the two race groups, the association achieved statistical significance in African American men, but was slightly smaller in magnitude and borderline significant in Caucasian men. Investigations of genetic susceptibility loci at 8q24 in the African Ancestry Prostate Cancer GWAS Consortium identified several loci that were specific to African American men, suggesting the potential for different mechanisms at 8q24 contributing to prostate cancer for African American men when compared with men of other ancestries [10]. Our findings, although warranting replication with larger sample sizes, support the need for continued investigation into the potential differing roles of the 8q24 locus in prostate cancer by race/ancestry.

While the present study using prostate tissue among men with prostate cancer and our previous case-control study using pre-diagnostic peripheral blood DNA are not directly comparable due to differences in study design and the timing and type of samples, we note that the direction of association between MYC DNA methylation and aggressive prostate cancer observed in the present study was opposite to that reported in our previous study [28]. The differences in our findings may be due, in part, to differences in DNA methylation by tissue type, as it is well established that DNA methylation displays distinct patterns by tissue [56], and may also be due in part to changes in DNA methylation over time due to the carcinogenic process [57]. We observed clear differences in DNA methylation between normal and tumor tissue in this study, suggesting that a somatic alteration occurred. We would not expect DNA methylation patterns in pre-diagnostic lymphocyte DNA to reflect somatic DNA methylation alterations in prostate tissue, such as those identified by our present study.

In the present study, we had the advantage of being able to incorporate RNA expression data so we could investigate the potential function of alterations in MYC DNA methylation in prostate tissue in the context of RNA expression modulation. Previous research demonstrated the importance of MYC promoter methylation and signaling in the regulation of miR-27a-5p in prostate cancer [58], providing impetus for further study of MYC DNA methylation with regard to other genes/ncRNAs. Our top associations (p < 0.05) for MYC DNA methylation and RNA expression included inverse associations with PRNCR1 and CASC11 and, notably, the association between MYC DNA methylation at CpG 216 (chr8:128753200) and PRNCR1 remained significant after adjustment for multiple comparisons. Prostate cancer-associated non-coding RNA 1 (PRNCR1) is a lncRNA that has been proposed to contribute to prostate carcinogenesis by increasing the looping of androgen receptor (AR)-bound enhancers to AR target genes and, in turn, increasing cell proliferation [59].
Previous studies have observed up-regulation of PRNCR1 expression in prostate cancer cell lines and in the precursor lesion prostatic intraepithelial neoplasia [15]. Consistent with these findings, we also observed higher expression of PRNCR1 in prostate tumor compared to paired normal tissue samples in our study. Cancer susceptibility candidate 11 (CASC11) is another lncRNA that has been shown to be overexpressed in prostate cancer [16], although we did not detect a difference in CASC11 expression between the tumor and normal samples in our study. Our observations of lower MYC DNA methylation in tumor tissue and inverse MYC methylation associations with PRNCR1 and CASC11 thus appear to be in line with the biology of these ncRNAs (i.e., if MYC methylation is lower in tumor than normal tissue and inversely associated with these lncRNAs, then we would expect to see increased expression of these lncRNAs in tumor, as we and others have observed).

Our other top finding (p < 0.05) regarding MYC DNA methylation and RNA expression was a positive association between MYC methylation and miR-1206 expression, although this finding did not persist after multiple comparison adjustment. MiR-1206 is part of a lncRNA transcript of the PVT1 gene and, notably, was found to have lower expression in prostate tumor compared to normal tissue in a previous study [60]. The directions of our observed associations (i.e., lower MYC methylation in prostate tumor than normal tissue and positive association between MYC methylation and miR-1206 expression) thus also appear to fit with the biology of miR-1206.

Our finding of no association between MYC DNA methylation and MYC expression in prostate tumor suggests that if MYC methylation influences the expression of the above ncRNAs, then it is unlikely to be through an effect on MYC expression. Instead, it is possible that there is a direct effect of MYC methylation on expression of the ncRNAs. There is some plausibility for this hypothesis based on examples of long-range interactions at chromosome 8q24, which influence gene expression, for example, an interaction between rs378854 (a SNP in linkage disequilibrium with rs620861, an established prostate cancer susceptibility SNP) and PVT1 [61]. While our findings provide some clues about the potential function of alterations in MYC methylation, it is also possible that DNA methylation alterations in this region do not directly affect RNA expression, and that these are independent changes that are both correlated with the carcinogenetic process. Further research is needed in in vitro or animal models to assess if there is a causal effect of MYC DNA methylation alterations on RNA expression at 8q24 and to clarify the underlying mechanisms.

There were several limitations in our study. Our overall sample size was relatively small and we only included radical prostatectomy patients with a GS of 6 or 7. It would have been preferable to compare patients with GS of 8 or higher (instead of GS 7) to those with GS 6 for better contrast (which we hypothesize would have led to larger effect sizes); however, there were relatively few patients with a GS of 8 or higher in our patient population. We expect this may be due, in part, to the fact that we drew our study participants from radical prostatectomy patients and this surgery is predominantly indicated for patients with localized disease, who usually have a lower GS.
To help reduce this limitation, we focused on tumor regions with a Gleason pattern 4 where possible for patients with GS 7 to increase the contrast with our comparison group (GS 6, typically involving pattern 3 only). We only evaluated DNA methylation for a small number of CpG sites in MYC, for which we had a priori rationale; future studies would benefit from a more comprehensive investigation of DNA methylation across the MYC genome, as well as replication of the results for the specific CpG sites of interest in an independent population. While inclusion of the TCGA data allowed us to evaluate two CpG sites nearby, we cannot directly compare our results from the University of Maryland samples with TCGA because different CpG sites were evaluated (the sites evaluated in the University of Maryland samples were not covered by the array used in TCGA) and the extent of correlation between the CpG sites is unknown. There were also some ncRNAs that we were unable to evaluate in our analysis because they were not covered on the Clariom D array; however, we were able to evaluate expression for most genes and ncRNAs within 1 Mb of MYC, providing insight regarding potential mechanisms. We were also unable to investigate the main effect of race on MYC DNA methylation due to our study design, such that we fixed the distribution of GS within each race group, which may have made the distribution of DNA methylation more similar between the race groups as well. However, the rationale for this design is that it maximized our power to separately evaluate the association between MYC DNA methylation and GS by race, which was one of our study aims.

Our study also had several strengths, including the integration of epigenetic and transcriptomic data. By incorporating RNA expression data from the Human Clariom D array, we were able to evaluate the association between MYC DNA methylation and RNA expression for MYC and a variety of other genes and ncRNAs of interest nearby. Other strengths were: (1) our diverse study population, which allowed for separate study of African American men, who bear a disproportionate burden of aggressive prostate cancer and prostate cancer mortality, but have been traditionally understudied; (2) use of GS as an outcome, which is an established intermediate prognostic marker for prostate cancer [29]; (3) centralized pathology review of all samples by a single pathologist; and (4) use of pyrosequencing, a reproducible and quantitative method for detecting inter-individual differences in DNA methylation.

In summary, focusing on CpG sites from exon 3 to the 3′ UTR of MYC, our results suggested lower MYC DNA methylation in prostate tumor compared to paired normal prostate tissue, and lower exon 3 DNA methylation for more aggressive compared to less aggressive tumors. Although numbers were relatively small, we also noted some evidence of a stronger association for African American than Caucasian men for one of the CpG sites that displayed a significant overall association with GS, highlighting the importance of continued investigation of prostate cancer biomarkers separately by race. Incorporation of RNA expression data allowed for initial investigation into the potential function of these DNA methylation alterations.
More research is needed to replicate these findings and further explore the relationship of MYC DNA methylation with aggressive prostate cancer, including the function, timing and stability of these alterations over the course of prostate carcinogenesis, and their potential role as passengers or drivers. Further research is also needed to elucidate differences in prostate tumor biology by race/ancestry, which may help provide insight into differences in disease aggressiveness and disparities in prostate cancer outcomes.

Conflicts of Interest: L.Y., M.P. and A.M. are employed by EpigenDx, Inc., and L.Y. is the major stockholder of EpigenDx. All other authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
2020-12-31T06:18:17.535Z
2020-12-24T00:00:00.000
{ "year": 2020, "sha1": "e9cbddfe4221ca40f052ef4b7d2896b4a4c707b9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4425/12/1/12/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ac515fa9d1595039263e34f20cfc83f8291d1867", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
114442685
pes2o/s2orc
v3-fos-license
An analysis of the benefits of ethnography design methods for product modelling

The essence of modelling is to reflect the studied piece of reality in such a way that best describes the selected elements of the designed system. A model is used in design to optimize the structure and parameters of the constructed object and is a tool for assessing the quality of construction, eliminating weak links and ensuring adequate safety components. In view of the aim of modelling, it can be divided into functional modelling, showing the complexity of the object, and reliability modelling, specifying its states at variable threshold values. In design, modelling allows for significant savings in resources that would otherwise be spent because of problems appearing at the prototype stage, but also during production or in the course of using the product. In the practice of ergonomic design many problems could be avoided if, early enough in the design process, the values of parameters and their relations were taken into account through modelling. On the other hand, the modelling process can be costly and time-consuming to carry out, and against the currently pervasive lean production it is a highly undesirable factor. Therefore, the modelling process should be supported with the use of appropriate cognitive techniques, namely ethnography design, which would determine inadequacies of existing models as well as indicate the equivalent conditions for modelling. The justification of the use of this technique results both from the possibility of providing additional information and from the opportunity to "test" the phenomena affecting the design process. Ergonomic modelling tests developed solutions towards their adaptation to users' anthropometric, biomechanical and psychomotor characteristics, as well as behaviour patterns. However, knowledge of the latter and achieving a sufficient ergonomic and functional quality of proposed solutions often requires the use of the ethnography design approach. The aim of this article is to test the practical application of ethnography design methodology in product design and to analyse the benefits of its use. The analysis is based on effects of its application with the support of product design from various industries, along with a discussion of the method's limitations. Among the benefits of ethnography design, the greatest proved to be providing knowledge of nonspecific user behaviour previously unknown to designers, which, when rendered by models, made it possible to develop innovative solutions.

Introduction

In the broad sense the essence of modelling is to reflect the studied piece of reality in such a way that best describes the selected elements of the designed system. A description of the elements gives knowledge about how they operate in different variable conditions and the ability to test the values they adopt. It is not different in the case of ergonomic modelling for the design of products, which are to respond to the needs of specific groups of users. A model is used in design to optimize the structure and parameters of the constructed object and is a tool for assessing the quality of construction, eliminating weak links and ensuring adequate safety components [1].
In view of the aim of modelling, it can be divided into functional modelling, showing the complexity of the object, and reliability modelling, specifying its states at variable threshold values [2]. Often such models are built to achieve a specified feature of the system [3]. Depending on the way of reproducing reality, models may be mental or material; the latter, due to the possibility of testing in similar material conditions, have a greater cognitive value. In design, modelling allows for significant savings in resources that would otherwise be spent because of problems appearing at the prototype stage, but also during production or in the course of using the product. Models also have an invaluable cognitive nature, because they allow for testing of a particular scale and complexity of a specified slice of reality. However, in order for them to be used, it is necessary to know the relationships and parameters that are necessary for a possibly complete description of the assumed fragment of reality. In the practice of ergonomic design many problems could be avoided if, early enough in the design process, the values of parameters and their relations were taken into account through modelling. On the other hand, the modelling process can be costly and time-consuming to carry out, and against the currently pervasive lean production it is a highly undesirable factor. Therefore, the modelling process should be supported with the use of appropriate cognitive techniques, namely ethnography design, which would determine inadequacies of existing models as well as indicate the equivalent conditions for modelling. The justification of the use of this technique results both from the possibility of providing additional information [4], as well as the opportunity to "test" the phenomena affecting the design process.

Need for supporting product design through modelling

Modelling is a means to achieve a greater quality of the designed solutions, and this commitment is a fundamental duty of every designer [5], which is consistent with the trend of corporate social responsibility [6]. It is worth noting that a product, being any artificial object constituting the result of processes and used to address the specific needs and expectations of the user, may have a modular structure, of which the "real" part can be only a fragment of the whole. Besides the physical structure, the product will have a functional structure, which in most of today's products will manifest as its mechatronic property. This implies the need for modelling on several planes, reflecting the states of all elementary members. An additional difficulty in this regard is the need to reflect nonlinear relations, which are characteristic of work systems [7]. Product modelling is a necessary function of the design process due to its ability to provide [8]:
- the possibility of carrying out analyses of individual elements joined to make a whole, taking into account the conditions of individual parameters, e.g.
stress analyses or value chain analyses within the manufacturing process; modelling allows for the inclusion of elements of the whole in a structured mathematical model (quantity models, Computer Aided Engineering, applied by, for example, Abaqus/Explicit),
- prototyping capabilities, which may rely on kinematic functionality, ergonomic verification, analyses of the possibility of assembling individual components, or the dynamics of elements and units, which involves a qualitative analysis of the proposed solutions (quality models), which can occur at different levels of generality.

More detailed modelling processes can be targeted to achieve a particular specific result, for example: to clarify and simplify the interaction between elements, to confirm the fulfilment of certain restrictions on the process, or to verify the accuracy of a new element of the system before its implementation. Due to the formalization of processes in statistical analysis and reliability modelling, even those of a nonlinear nature, in the following sections mainly qualitative models will be analyzed. Of particular interest in the field of product modelling is attaining functional quality [7], which is generated during the confrontation of expectations of the user with the quality provided by the product and the product's impact on the user through its ergonomic action, which is the sum of many interconnected components [10]. This type of modelling is particularly applicable in the case of products which have been altered in their essence from previous versions (lack of abilities to verify the evolution), or the use of solutions on a new audience.

Essence of ethnography approach in design

The ethnographic approach is a method which helps to understand the actions of people in order to better adapt newly designed products, services and processes [11]. At the same time, ethnography can be used to examine the effects of implemented changes in the products, services or processes [12]. Ethnography design is primarily ergonomic because it solves the issue of designers having knowledge of the principles of constructing products but lacking insight into the actions and operation of people [13]. Meanwhile, the implemented products change the behaviour of humans on which they have an impact. Ethnography design allows designers to control the ergonomic quality of products through an in-depth study of the functionality and user interaction with the product [14]. Between the fields of design and ethnography, there also exist many ways to integrate knowledge and experience, e.g. through joint training of specialists in both fields, exchange of knowledge and experience in the design process, or mutual participation of designers in the observations. Also, actions in the field of ethnography design are focused on the appropriate methods of interaction, which will not be suggestive to users and will allow information to be obtained [15].

The main principles for conducting research using ethnography are based on four assumptions organizing and distinguishing ethnography from other methods [13]:
1. natural environment - ethnography is based on carrying out research in the field, among the target audience; it should be understood that there is a commitment in ethnography to investigate the activities of people in their everyday living and working conditions,
2.
holistic approach - stems from the belief that learning about real problems in the so-called "present state" and understanding the rules of conduct and choices of people can be possible only through the observation of reality in the whole context of work and life of humans,
3. descriptive approach - during ethnographic observations, the researcher takes notes or records videos to later be able to analyze the observed events and findings that are relevant to the research; it is important that during the observation one retains a "non-critical" attitude,
4. "from the point of view" approach - involves an attempt to empathize with the observed person or technical object or phenomenon and to attempt to understand how a given person perceives their space and task; this approach has a great advantage over survey methods, because, as opposed to the survey, it is not based on pre-established findings.

The first two of these characteristics may seem contradictory to the idea of modelling, because modelling implies in the majority of cases a departure from the natural environment, and it is not always possible to carry out a holistic analysis of a system. This results in the need to use modelling based closely on the natural design environment and a comprehensive coverage of the tested phenomena. Found in the literature are the following ways ethnography could be involved in technology design [16]:
- identifying "sensitizing" concepts,
- developing specific design concepts,
- driving innovative technological research.

Another role which can be played by ethnography is evaluating design [17], where the use of ethnography is conducted as a common-sense check on the design. Another important function of ethnography in design can be context awareness [18]. This feature is particularly important in cases where the designer receives guidelines regarding design without knowing its environment, which will be important for carrying out specific functions in reality. In ethnographic practice observations are a valuable source of data also because users are unable or unwilling to articulate their specific needs or concerns. It commonly happens that people deliberately mislead, are confused, are afraid to say what they really think (e.g. for fear of losing their jobs), have no knowledge on the topic and thus are not able to speak, have problems with making contact, or want to be liked and distort their real thoughts, often unconsciously. This is the main motivation for the application of observation, and as a result ethnography design can be applied in the design of products for the elderly [19].

Application of ethnography design in product modelling

The assessment of the degree of representation of reality by the created model in the process of product design is in fact an assessment of the prescribed quality of specific patterns of action of a specific group of users against the functional diagram - Figure 1. Ethnography design has its uses in the development of user requirements during the initial stages of the design process, which are processed into a list of requirements, which in turn is analyzed through the use of simulations - Figure 2.

Figure 2. Typical ethnography design application.

In the majority of cases, the use of such a solution has practical considerations. Very often a needs analysis is accompanied by an observation of the existing conditions (ethnographic approach), and from here any identified needs of users are derived.
However, this leads to a lack of opportunities to test the proposed solutions, and the simulation applies only to those aspects which are considered important to the designer. By way of simulation, models that allow to examine reality tend to be developed, but their quality in terms of reflecting reality beyond the selected aspects is not tested - which forms the main assumption of classical modelling. Figure 3 shows a slightly different structure, which forms an extension of simulation conditions by the addition of elements of testing the real application of proposed solutions by the target users. On the basis of these activities, new variables or design parameters can be determined and a multi-criterial representation in terms of individual elements can be evaluated. This approach requires contact of designers with the tested group of users that could be subjected to a simulated observation while using a part of the solution. Such an approach is possible with the use of modern production techniques (e.g. 3D printers).

There is also the possibility of creating equivalent structures, which in certain limited areas will simulate an ethnographic verification of solutions, but will not allow for the disclosure of unknown applications. Such a structure, equivalent to the real one, can be a mechanical or logical system similar in operation to it - the shaking of hands when placing a key in the lock can be replaced by a mechanical oscillator which will verify inertness during the use of a lock by a person with reduced manual dexterity. However, only ethnography design may lead to the discovery of specific ways of actions of users related to such a dysfunction, such as the need to limit the turning force of the key, which may not be detected due to it being covered by the first need of the user.

In the case of applying ergonomic modelling in unclear conditions, the quality of representation could be reliably assessed, e.g. in modelling the anthropometric adaptation between a human and other objects and the ability to exert forces, excluding unspecified functional overlaps of dynamic anthropometry. In turn, modelling in a more theoretical-cognitive sense will be possible with the use of ethnography design only at the level of identification during model creation, and its verification will not be possible, as is the case with modelling error [20]. An interesting modification of the ethnographic approach is the use of technical means such as video-recorders that can recognize certain states in the real system [21] and transfer them to the theoretical model. A more complex aspect is to assess the quality of the transfer of reality during modelling of heuristic and thought processes; in this area one can rely on artificial neural networks [22]. However, the quality of representation with the use of the ethnographic approach may rely only on principles similar to the "black box," as an assessment of the degree of conformity between input and output parameters for both systems.

Need for ethnography design in product modelling

Ethnography design has significant advantages in the modelling process; however, there are contraindications to its use due to it increasing the time intensiveness of the design procedure and the possible obstruction to proceedings. The identified needs for ethnography design are presented in terms of the various stages of the design process - Table 1.
As indicated in Table 1, the most relevant application of the ethnographic approach in design is in the early stages of design, while establishing the criteria and usage needs. In subsequent phases, when it comes to limiting the number of concepts, overlooking crucial and hidden needs of users may result in the project not achieving the right level of functional and ergonomic quality. It should therefore be noted that the use of the ethnographic approach in ergonomic design beyond the obvious initial identification phases occurs in the case of:
- complex solutions, with an unspecified level of quality,
- solutions for a broad group of diverse users, with different cultural norms and procedures,
- the designer's lack of knowledge concerning the environment of the proposed solutions or the existence of significant levels of uncertainty in his understanding,
- information regarding undetermined problems in the application of the proposed solution by a specific group of users.

The applicability of the ethnographic approach is also dependent on the available resources of time and money; however, in most cases the effect obtained with its help will exceed the incurred costs.

Practical application and benefit analysis of the ethnography design approach

The vast majority of researchers examining the ethnographic approach in design focuses on the characteristics of the method, its uses in teaching [26] and descriptions of example applications [27], without simultaneously considering the practical applications and the scale of the achieved results. This is justified due to the complexity of comparative studies that would allow to reliably evaluate the effectiveness of two design approaches differing only in the application of ethnography design elements; however, this does not convince a wide group of designers to apply the approach. The effectiveness of this approach can therefore be measured only on the basis of an absolute evaluation, probably accumulating many other factors.

Virtual reality is an area of particular applicability of the ethnographic approach, though this concerns not so much the group of designed products as tools supporting design [28]. Often the ethnographic approach is indicated as particularly adept at discovering new products [29], or especially at acquiring data during the design process which the designer does not have [30]. A significant problem in this regard is the belief of designers that they do not have enough information to design a solution. The ethnographic approach is also indicated as one of the key elements of the success of the Xerox company due to the creation of a research team - Xerox PARC [31] - which was responsible, among other things, for the development of an interface scheme based on a significantly highlighted quick copy button, adapted later by all manufacturers of copying machines. This and many other design solutions based on the principles of ethnography design indicate the approach's potential, but in many cases it is not possible to distinguish the effect of the applied changes because of the uniqueness of design processes and the variety of the means used to achieve a product with a high functional quality.

Conclusions

The typical application of the ethnography design approach in the design process has transferred from the initial stages to the next, allowing for the testing of solutions on a smaller sample of users prior to releasing the finished solution to the market.
Such concurrent modelling of the analyzed reality at further, and at the same time sufficiently early, stages of design makes it possible to choose better decisional variants, which could otherwise have been eliminated due to the lack of knowledge of all the testing environment's parameters. Ethnography design is also associated with ethical issues and is in line with the idea of participatory design. It should be noted, however, that it must be complementary with respect to the standard stages of design, and the selection of the research sample must allow for achieving results that do not disturb the various partial objectives. Selection of appropriate target groups is crucial, because there is a great danger of adapting non-standardized properties which are rare in a group of users and can exclude others - the so-called design antinomies. For this reason, taking on the ethnographic approach can also be difficult; however, the possibility to supplement the shortcomings of standard modelling is worth the risk.

A salient issue for future studies is to demonstrate the effectiveness of design enriched with the ethnographic approach, which is particularly important because of the time and cost effectiveness of modern design, which seeks, using approaches such as concurrent design, to maximally condense the design process. The problem in this regard may be the necessity to use qualitative changes in the assessment of design procedures, which is always less attractive than quantitative changes (e.g., in shortening the design process). Perhaps the solution to this problem is to widen the universal approach with the ethnographic paradigm, which, beyond declarative statements about the need for participatory approaches, would allow methodologies ensuring this participation to be developed.
2019-04-15T13:06:29.373Z
2016-08-01T00:00:00.000
{ "year": 2016, "sha1": "aabb834bcbcb14a14fce701876cc1f22114641ff", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/145/4/042023", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "12d19b8faab4b356402eb536f94dcbbc46adeed3", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Engineering" ] }
261676009
pes2o/s2orc
v3-fos-license
Pesticides in vegetable production in Bangladesh: A systematic review of contamination levels and associated health risks in the last decade

This paper reviewed the published data on the levels of different pesticide residues in vegetables (tomato, eggplant, beans, gourds, cauliflower, cabbage, cucumber, potato, carrot, onion, red chilli, red amaranth, lady's finger, spinach, coriander, and lettuce) from Bangladesh in the last decade. Vegetable production in Bangladesh has increased tremendously (37.63%) compared to the previous decade, along with its pesticide use. The most observed pesticide groups used in vegetable production were organophosphorus, pyrethroids, carbamate, organochlorine, the nereistoxin analogue group, and neonicotinoids. More specifically, chlorpyrifos, dimethoate, diazinon, and malathion were the most used pesticides. More than 29% of the vegetable samples (1577) were contaminated with pesticide residue; among the contaminated samples (458), most cases (73%) exceeded the maximum residue limits (MRLs). The pesticide-contaminated vegetables were cucumber (51%), tomato (41%), cauliflower (31%), miscellaneous vegetables (36%), eggplant (29%), beans (23%), cabbage (18%), and gourds (16%). Among the pesticide-contaminated samples, vegetables with above-MRL residues were gourds (100%), beans (92%), tomato (78%), eggplant (73%), miscellaneous vegetables (69%), cucumber (62%), cabbage (50%), and cauliflower (50%) (p < 0.05). It was also observed that a single vegetable was often contaminated with multiple pesticides, and farmers did not follow a proper withdrawal period while using pesticides. A hazard quotient above one (HQ > 1) was observed in adolescents and adults for tomato, eggplant, beans, cauliflower, cabbage, cucumber, lady's finger, lettuce, and coriander. There was no health risk observed (HQ < 1) for gourds, potato, carrot, onion, red chilli, red amaranth, spinach, and okra. The highest acute and chronic HQ (aHQ, cHQ) was observed for cypermethrin (bean) in adolescents (aHQ = 255, cHQ = 510) and adults (aHQ = 131, cHQ = 263). It was also observed that these pesticides harmed air, soil, water, and non-target organisms. Nevertheless, the review will help the government develop policies that reduce pesticide use and raise people's awareness of its harmful effects.

Introduction

Bangladesh is an agrarian country where the agriculture sector plays a pivotal role in the national economy. About 80% of the people of this country live in rural areas, and agriculture is their primary livelihood source. The agriculture sector is the most important and comprises about 13.02% of the national GDP (Gross Domestic Product), and it employs around 40.60% of the total labour force [22]. The performance of this sector significantly impacts national development, employment generation, poverty alleviation, income inequality, food security, nutritional attainment, and so on [53].

Bangladesh is endowed with fertile soils and favourable climatic conditions for producing various crops throughout the year [84]. Since independence in 1971, the food production of this country has increased tremendously. In the early years, people were primarily interested in producing rice-based crops [57]. But now, the scenario is different as people are more interested in growing various other high-value crops [43]. Thus, the government of Bangladesh has called for a departure from "rice-led" growth to a more diversified production that includes several non-rice crops like vegetables, maize, legumes, livestock, and so on [57].
Vegetables are cultivated worldwide, from large commercial growers to small subsistence farmers [38]. Farmers usually cultivate vegetables that fetch a high market price to gain economic solvency. In Bangladesh, vegetable cultivation is increasing day by day as people become more conscious of a healthy diet [2]. Although vegetable farming was performed only at the household level in the early years, it has now moved from the household to the commercial field level [55]. The production of vegetables has more than doubled over the years, making Bangladesh one of the fastest-growing vegetable producers in the world [2]. Compared to other crops, vegetables are far more beneficial to farmers: they generate cash for the growers and invigorate the rural economy [91].

Vegetables are an integral part of a healthy diet. They are low in fat and calories and rich in vitamins (A, B1, B6, B9, C, E), minerals, dietary fibre, and phytochemicals [66,67]. There is little chance of malnutrition when people include enough vegetables in their diet [55]. But vegetables can become a health hazard when contaminated with chemicals such as pesticides. Farmers use pesticides to protect vegetables from insect, pest, and disease attacks. If farmers do not maintain the withdrawal period, pesticide residues will remain in the vegetables and harm consumers.

Over the last decade, many studies have been conducted to determine pesticide residues in vegetable production in Bangladesh [10,101,102,23,4,5,53,6,63,64,82,90]. However, the data covered either one pesticide group, a single pesticide, or a combination of pesticides in a single vegetable or a group of vegetables. For a complete picture of pesticide use in Bangladesh, an overview of all pesticides in vegetable production needs to be summarized. This document summarizes the results of those studies and shows the actual scenario of pesticide contamination in vegetables. Over the last decade, vegetable production in the country increased tremendously compared with the previous decade, and pesticide use increased alongside it. This study aims to make the scientific community in Bangladesh realize the need for further research to generate a comprehensive and reliable database for the last decade. So the main aim of this review is to document, evaluate, and analyze the data (from the last decade) on the levels of different pesticide residues in vegetables (tomato, eggplant, beans, gourds, cauliflower, cabbage, cucumber, potato, carrot, onion, red chilli, red amaranth, lady's finger, spinach, coriander, and lettuce) in Bangladesh. It also covers the vegetable production scenario, major pesticide use, hazard analysis, and the impact of pesticide usage in Bangladesh.

Materials and methods

This review has been conducted according to the guidelines for systematic reviews followed by Moher et al.
[80]. Published literature on pesticide residue detection in vegetables was collected from peer-reviewed journals and from online technical and government reports using a systematic approach. The following keywords were used to search the literature: "detection and quantification" (or only "detection" or only "quantification"), "use of pesticides", "vegetable production", "pesticide residue", "pesticide contamination", "the impact of pesticide usage", "health risk", "Bangladesh", and so on. We carefully examined, downloaded, and evaluated the papers and materials found in the search. In this review, we only considered original research data written in English. The collected works underwent extensive screening against the subject of interest, which yielded 110 articles for analysis. A reference management tool, Mendeley, was used to maintain the complete articles in PDF format. The selection criteria were: (a) use of pesticides in vegetable production, (b) pesticide residues in vegetables, (c) levels of contamination, (d) associated health risks, and (e) impacts on humans, animals, and the environment.

Selection and analysis

At first, a total of 423 articles that primarily fit the area of interest were selected. However, after careful evaluation, it was observed that among the primarily selected articles, 199 were not research articles, were not accessible, or did not meet the criteria, and they were excluded from the records. There were 224 publications in total that contained original research data, of which nine were not written in English and were excluded from the list. Out of the remaining 215 papers, 105 were not taken into consideration for this study because they lacked sufficient information on our selection criteria. The remaining 110 were chosen as the relevant study resources for the review (Fig. 3).

Vegetable production scenario in Bangladesh

Vegetable production in Bangladesh is increasing rapidly. In the last decade, the country grew vegetables on 9.98 lakh acres of land to produce 29.93 lakh tonnes of vegetables [34]. In Bangladesh, vegetables are grown on only 2.63% of cultivable land [21]. Although a small portion of cultivable land is used for vegetable cultivation, production has seen a significant 37.63% rise in the last decade. The Department of Agricultural Extension (DAE) estimates that during the 2018-19 fiscal year, Bangladesh produced over 26.7 million tons of vegetables on around 1.25 million hectares of land [33]. At present, more than 60 different types of vegetables of indigenous and exotic origin are grown in various regions throughout the year [32]. Based on the cultivating season, vegetables are categorized into summer/rainy season vegetables, winter season vegetables, and all-season vegetables. Summer/rainy season vegetables are grown from May through October during the monsoon season, whereas winter vegetables are cultivated for a short period between November and April. About two-thirds of the total vegetables are produced in the winter season [88]. In Bangladesh, 60-70% of vegetables are grown in winter, and most areas have a marketable surplus during that time [107]. While the daily recommended amount of vegetables for Bangladesh is 250 g, the average daily intake per person is only 56 g [45]. Thus, to meet this high consumer demand, farmers are getting more involved in vegetable production along with rice cultivation. Nowadays, farmers are practising intensive agricultural farming to produce more vegetables.
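As a quick sanity check on the screening flow just described, the exclusion counts can be tallied programmatically. The following minimal Python sketch uses only the counts reported above; the variable names are illustrative:

    # Article-screening flow for this review (all counts taken from the text above).
    identified = 423                        # articles initially selected
    not_research_or_inaccessible = 199      # excluded at first evaluation
    non_english = 9                         # excluded for language
    insufficient_information = 105          # excluded on the selection criteria

    after_first_pass = identified - not_research_or_inaccessible       # 224
    after_language_filter = after_first_pass - non_english             # 215
    included = after_language_filter - insufficient_information        # 110

    assert (after_first_pass, after_language_filter, included) == (224, 215, 110)
    print(f"Articles retained for the review: {included}")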
Of the total cultivable land used for vegetable production, brinjal occupies 12%, tomato 7%, pumpkin 7%, radish 6%, arum 5%, beans 5%, cauliflower 5%, water gourd 4%, bitter gourd 4%, cabbage 4%, pointed gourd 2%, and other vegetables (potato, spinach, carrot, cucumber, red amaranth, onion, okra, and so on) 39% (Fig. 1). The major winter vegetables were tomato, cabbage, cauliflower, bean, gourd, radish, carrot, red amaranth, and eggplant, while the major summer vegetables were pumpkin, okra, cucumber, bitter gourd, and so on [22]. In summer, vegetables were produced on 524 acres of land with a total production of 1871 tons, whereas in winter, vegetables were produced on 547 acres of land with a total production of 2465 tons. The area-wise (acre) individual vegetable production (tons) in Bangladesh is shown in Fig. 2.

Obstacles encountered in vegetable production in Bangladesh

Bangladesh is a tropical country, and its environment is favourable for many insects, pests, bacteria, fungi, and unwanted plant growth. Many tropical regions receive heavy rainfall annually, which contributes to many vegetable diseases [1]. Rain, heavy dews, high temperatures, and dry climates (primarily for insect infestation, which is influenced by rain) have been identified as key factors encouraging pest establishment [70]. Insect pests directly damage vegetable production or act as vectors for several viral diseases. Insects distort leaves, stunt growth, and kill young plants. The edible roots of plants are damaged by larvae (caterpillars). Adults and larvae feed on plant sap, which produces white spots on the leaves; affected plants may wilt or die [47]. Thus, farmers constantly face many difficulties while cultivating vegetables [1]. The consequences of climate change, such as global warming, temperature changes, and biotic and abiotic stresses, may hinder vegetable cultivation [18]. Climate change hinders vegetable production by retarding growth, preventing seed germination, impairing adjustment to high or low temperatures, and making plants vulnerable to insect pest and disease attacks. The main problems of vegetable cultivation are now increasing insect and pest attacks [92], disease problems [52], climate change [89], drought, salinity, and so on [51]. Farmers use pesticides to protect vegetables from insects, pests, and disease attacks and to improve production and aesthetic value.

Pesticide usage in Bangladesh

The usage of chemical inputs such as pesticides has risen to boost agricultural production and productivity in Bangladesh. Pesticides are routinely employed on vegetables and other crops or plants because of their vulnerability to insect and disease attacks [76,104]. In Bangladesh, estimates showed that 25% of vegetables in the country were lost annually because of pest infestation [79]. Although the usage of pesticides began in 1951, it remained moderate until the 1960s. It was observed that 84 active chemicals with various formulations and 242 trade names of pesticides were registered for crop and vegetable protection in the country [17], which indicates the tremendous surge in use.
Bangladeshi farmers used insecticides along with small amounts of herbicides, fungicides, acaricides, and rodenticides, in granule, liquid, and powder form, for vegetable production [48]. Carbamates were used in up to 64% of the crop-producing area, whereas organophosphates were used in up to 35% [27]. Since 1990, organophosphorus pesticides have been the preferred group of pesticides for vegetable production in Bangladesh, as organochlorine insecticides were banned due to their persistence and severe toxic effects on the environment. About 77% of farmers used pesticides at least once per crop (37% applied once, 31% applied twice, and the rest applied 3-5 times). Farmers also sprayed these vegetables 17-150 times throughout each growing cycle [13]. According to the Department of Agricultural Extension, around 95% of farmers did not wait for the pre-harvest interval (PHI) following pesticide application [32]. Furthermore, several pesticides used in Bangladesh are prohibited or restricted worldwide [83,99]. Most farmers apply pesticides without understanding their actual requirements or efficacy, resulting in high pesticide application frequencies in Bangladesh [20]. Because of this lack of awareness about pesticide use, more than 90% of pesticides are used unnecessarily, indiscriminately, and excessively [32]. Farmers prefer pesticides over fertilizers as they keep insect pests in check while ensuring better production than fertilizers; the price of the pesticide is also a major factor. At the same time, pesticide use is higher in underdeveloped regions than in developed ones, as people in developed regions are more inclined to grow organic vegetables. The indiscriminate use of pesticides leads to residues in the vegetables; thus, fresh vegetables become contaminated with hazardous pesticides, and food safety has become a significant public health concern [108].

Fig. 2. Area-wise (acre) individual vegetable production (tons) in Bangladesh [22].

Farmers in this country used different types of pesticides (Tables 1-8). The widely used groups were organophosphorus, pyrethroids, carbamates, organochlorines, the nereistoxin analogue group, neonicotinoids, and so on. More specifically, chlorpyrifos, dimethoate, diazinon, and malathion were the most used pesticides. In Bangladesh, organophosphorus (OP) pesticides are the most widely used for controlling insects and mites on vegetables. They are very functional and have a broad spectrum of activity [19]. They were invented in the early 19th century, but their effects on insects, which are similar to their effects on humans, were discovered in the 1930s [87]. Since 1990, organophosphorus pesticides have been widely used in Bangladesh, with 35% of the crop-producing area treated with organophosphates [28]. Humans are generally exposed to organophosphorus pesticides through ingestion and contact [19]. The most commonly detected OPs were chlorpyrifos, diazinon, malathion, dimethoate, parathion, fenitrothion, phenthoate, acephate, quinalphos, pimethoate, phosphamidon, pirimiphos-methyl, and dichlorvos (Tables 1-8).

Carbamate pesticides have low mammalian toxicity, rapid disappearance, and a broad spectrum of activity [54]. They interfere with the transmission of nerve signals by blocking the acetylcholinesterase enzyme, resulting in the death of the pest by paralysis [110]. The most common carbamate pesticides used by farmers were carbaryl, carbosulfan, carbofuran, pirimicarb, and so on.
Due to adverse health and environmental effects, many organochlorines, such as DDT, chlordane, and toxaphene, were banned [65]. So the farmers of the country used endosulfan and dicofol for vegetable production. Pyrethroid pesticides are highly toxic to insects and fish [110]. They affect the central nervous system by altering the dynamics of the Na+ channels in the nerve cell membrane; these changes cause neuronal hyperexcitation [86]. Cypermethrin, permethrin, allethrin, bifenthrin, and deltamethrin are common examples. Neonicotinoids disrupt a specific neurological pathway in insects and were frequently used by farmers for vegetable production [31]. Imidacloprid was the most common neonicotinoid in use. Nereistoxin is a naturally occurring insecticide that blocks the nicotinic acetylcholine receptor in the insect body. Cartap was the most commonly used nereistoxin analogue.

Different analytical methods were used to determine the level of pesticide residues in contaminated vegetables. The most commonly used techniques were gas chromatography coupled with mass spectrometry (GC-MS), flame thermionic detector (FTD), flame ionization detector (FID), thermal conductivity detector (TCD), electron capture detector (ECD), flame photometric detector (FPD), and high-performance liquid chromatography (HPLC). The extraction methods used either solid-phase extraction or liquid-liquid extraction (Tables 1-8).

One study observed that pesticides were commonly found in 8 types of vegetables (eggplant, tomato, cauliflower, cabbage, potato, cucumber, carrot, and onion; 210 samples) collected from several vegetable-growing regions in Bangladesh. The most frequently detected pesticides were chlorpyrifos, carbofuran, diazinon, carbaryl, malathion, endosulfan, cypermethrin, and dimethoate. Pesticide residues were detected in 51.30% of the total samples; some samples contained multiple residues (10.47%), and 38.89% of samples had levels above the MRLs. The study indicated the overuse of pesticides in vegetable production in this country. It also suggested regular monitoring of pesticide levels in vegetable production and proper education for farmers regarding the potential risks and safe use of pesticides [29].

A study conducted in the Bogura district revealed that farmers indiscriminately used carbamate and organophosphorus pesticides in vegetable production (eggplant, tomato, and cucumber). The study also revealed that the farmers were not aware of the effects of pesticides on humans, animals, and the environment [58].

A scientific study found that different types of vegetable samples (eggplant, yard long bean, bitter gourd, snake gourd, pointed gourd, okra, tomato, hyacinth bean, and cabbage) from all over Bangladesh were heavily contaminated with pesticide residues. The analysis was performed by GC-FTD and GC-ECD. The results showed that 21.78% of samples were contaminated with chlorpyrifos, quinalphos, acephate, and cypermethrin residues, either as single or multiple residues. Moreover, 18.26% of samples had residues above the MRL [3]. Another study showed similar results in cauliflower samples: about 13.33% of the total samples had residues of 6 insecticides (cypermethrin, quinalphos, diazinon, malathion, fenitrothion, and acephate) exceeding the MRL, irrespective of single or multiple residues. The presence of the highest residue levels of insecticides in cauliflowers may be due to their irrational and repeated use before harvest [4]. Another scientific study revealed that raw salad vegetables (lettuce and coriander) in Dhaka city were heavily contaminated with pesticide residues. The authors also concluded that continuous, large-scale monitoring of pesticide residues is needed, and that pesticide dealers/retailers and vegetable growers should be trained on the safe use and handling of pesticides [5,6].
A survey was conducted in the Narsingdi district of Bangladesh regarding pesticides used in the production of vegetables like eggplant, cauliflower, and country bean. The study found that diazinon, malathion, quinalphos, fenitrothion, cypermethrin, fenvalerate, and propiconazole were the most commonly used pesticides. Among the collected samples, 64% were contaminated with pesticide residues, and 52% exceeded the maximum residue limit (MRL) [62]. Another study concluded that malathion, cypermethrin, chlorpyrifos, and cyhalothrin residues were present in eggplant, cauliflower, lady's finger, and bean samples, and the detected residues exceeded the MRL in some samples [82]. A study in Savar, Dhaka, concluded that bitter gourd samples were contaminated with acephate, dimethoate, fenitrothion, chlorpyrifos, quinalphos, diazinon, and malathion [63]. On the other hand, Islam et al. [61] reported that imidacloprid was present at a level of 35 ppm in spinach samples collected from Mymensingh Sadar. According to Rahman et al. [90], an effective management plan is needed for stringent regulation and regular monitoring of pesticides in vegetables, and to educate farmers and consumers about pesticides' detrimental effects on human health. In Bangladesh, pesticide use is increasing day by day because there are no updated food regulations, no monitoring, and no law enforcement; monitoring of pesticide residues should be made mandatory. From the review results, it can be concluded that farmers indiscriminately used pesticides in vegetable production and did not follow the proper withdrawal period.

Health risk assessment

Health risk estimations were made based on the pesticide residues detected in the different vegetable samples reviewed (Tables 1-8). Health risk indices of pesticide residues via dietary intake of vegetables were assessed according to the guidelines recommended by [40], following Bhandari et al. [24].

Acute/short-term HQ assessment (aHQ). The aHQ was calculated based on the estimated short-term intake (ESTI) and the acute reference dose (ARfD) as:

ESTI = (highest residue level x food consumption) / body weight (1)
aHQ = (ESTI / ARfD) x 100%

Chronic/long-term HQ assessment (cHQ). The cHQ was calculated based on the estimated daily intake (EDI) and the acceptable daily intake (ADI) as:

EDI = (mean residue level x food consumption) / body weight (2)
cHQ = (EDI / ADI) x 100%

This review estimated the dietary risks of pesticide exposure in adolescents and adults. According to the European Food Safety Authority, body weights of 32 kg for adolescents and 62 kg for adults were chosen for the population groups [41]. Based on the final report of the Household Income and Expenditure Survey 2016-2017, the food consumption rate of vegetables in Bangladesh was 166.1 g per capita per day [56]. HQ > 1 denotes a potential risk to human health [35], while HQ ≤ 1 indicates no risk [25,100].
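To make Eqs. (1) and (2) and the accompanying hazard quotients concrete, the following minimal Python sketch implements the calculation. The consumption rate (166.1 g per capita per day) and the body weights (32 kg, 62 kg) are taken from the text above; the residue level, ARfD, and function names in the example are hypothetical placeholders, not values from the reviewed studies. Note that since the quotients carry a factor of 100%, the HQ > 1 risk threshold corresponds to a computed value above 100.

    # Hazard quotient calculation following Eqs. (1) and (2) above.
    # Consumption rate and body weights come from the text; the example
    # residue level, ARfD, and ADI values are hypothetical placeholders.

    CONSUMPTION_KG_PER_DAY = 0.1661               # 166.1 g vegetables per capita per day
    BODY_WEIGHT_KG = {"adolescent": 32.0, "adult": 62.0}

    def acute_hq(highest_residue_mg_per_kg, arfd_mg_per_kg_bw_day, group):
        # ESTI (mg per kg bw per day) = highest residue x consumption / body weight
        esti = highest_residue_mg_per_kg * CONSUMPTION_KG_PER_DAY / BODY_WEIGHT_KG[group]
        return esti / arfd_mg_per_kg_bw_day * 100.0    # aHQ, in percent

    def chronic_hq(mean_residue_mg_per_kg, adi_mg_per_kg_bw_day, group):
        # EDI (mg per kg bw per day) = mean residue x consumption / body weight
        edi = mean_residue_mg_per_kg * CONSUMPTION_KG_PER_DAY / BODY_WEIGHT_KG[group]
        return edi / adi_mg_per_kg_bw_day * 100.0      # cHQ, in percent

    # Hypothetical example: 0.5 mg/kg highest residue, ARfD of 0.04 mg/kg bw/day.
    ahq = acute_hq(0.5, 0.04, group="adolescent")
    print(f"aHQ = {ahq:.1f}%  ->  {'potential risk' if ahq > 100 else 'no risk'}")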
After the health risk assessment, HQ > 1 was observed for carbofuran, diazinon, dimethoate, and phosphamidon in tomato; carbofuran, diazinon, dimethoate, and carbaryl in eggplant; malathion, cypermethrin, cyhalothrin, and dimethoate in beans; dimethoate and malathion in cauliflower; carbofuran in cabbage; imidacloprid in cucumber; malathion and cypermethrin in lady's finger; and dimethoate in lettuce and coriander, in both adolescents and adults (Tables 1a-8a; added to the supplementary material). No health risk was observed in gourds, potato, carrot, onion, red chilli, red amaranth, spinach, and okra for any pesticide, as the HQs were below 1. The highest acute and chronic HQs (aHQ, cHQ) were observed for cypermethrin in beans, in adolescents (aHQ = 255, cHQ = 510) and adults (aHQ = 131, cHQ = 263) (Tables 1a-8a; added to the supplementary material). This indicates the highest risk of dietary exposure through the consumption of these contaminated vegetables.

Impact of pesticide usage

Pesticides have become an unavoidable part of agricultural and public health practices [96]. Despite the benefits, their usage has detrimental environmental and public health consequences. Due to their high biological activity and toxicity, pesticides are unique among ecological pollutants. Most pesticides do not distinguish between pests and other life forms. As a result, they can be hazardous to humans, animals, other living organisms, and the environment if handled indiscriminately [110].

Impact on human health

Pesticides can enter the human body through inhalation of polluted air, dust, and vapour; oral exposure by consuming contaminated food and water; and dermal exposure by direct contact with pesticides [94]. Inhalation and dermal exposure occur during pesticide application in agricultural fields, forestry, at the household level, and occupationally [93]. Most farmers in Bangladesh did not use masks, gloves, or other protective equipment when spraying pesticides [48]. Over 87% of farmers openly admitted spraying pesticides with little or no care, and 92% did not take any precautions during usage, storage, or transportation [36]. Pesticides can enter the human body during and after application through various routes. Pesticide residues are absorbed at varying rates in different parts of the body, including the scalp (3.7%), forehead (4.2%), ear canal (5.4%), abdomen (2.1%), forearm (1.0%), palm (1.3%), genital region (11.8%), and ball of the foot (1.3%) [77]. Oral exposure occurs when pesticide levels in food (mainly vegetables and fruits) and water exceed the MRL. Pesticide toxicity is determined by the type of pesticide (extremely, highly, moderately, or slightly hazardous), the route of exposure (oral, dermal, or inhalation), and the dose received. It was estimated that pesticides poison over 1 million individuals yearly, resulting in 0.2 million deaths; agricultural workers account for half of these poisonings, while the remainder is caused by contaminated food and water [110].
Acute effect

Acute disease develops within a few days of contact with or exposure to the chemical. Acute illness in people is caused by pesticide drift from agricultural fields, pesticide exposure during application, and intentional or inadvertent poisoning [37,71]. Pesticide poisoning causes various symptoms, including headaches, body pains, skin rashes, poor focus, nausea, dizziness, impaired eyesight, cramping, panic attacks, and, in severe cases, coma and death [110]. Several measures have been proposed to minimize the occurrence of acute pesticide poisoning, including limiting pesticide availability, substituting a less toxic but equally effective pesticide, and encouraging the use of personal protective equipment [68,81].

Chronic effect

Chronic illnesses in humans are caused by prolonged exposure to sub-lethal pesticide concentrations (over years to decades) [49]. Symptoms do not appear right away but emerge later. Furthermore, when pesticides are sprayed on crops and vegetables that are then brought to market without maintaining a withdrawal period, consumers are exposed to pesticide residues [77]. Agricultural workers are at higher risk of exposure, but the general public is also in danger [49]. As pesticides become an increasingly pervasive element of our ecology, the incidence of chronic illnesses has begun to rise [50]. The effects of pesticide usage on human health are shown in Table 9.

Impact on the environment

Farmers, institutions, and the general public use and dispose of pesticides extensively, creating many pesticide sources in the environment. The range of action of pesticides is nearly impossible to control: a pesticide spreads in the air, is absorbed by the soil, dissolves in water, and eventually reaches a much larger region, even when applied to a relatively small area. Pesticides have a variety of fates once released into the environment. They are sprayed on crops, may travel via the air, and end up in other parts of the ecosystem, such as soil and water [110]. Directly applied pesticides might be washed away and reach adjacent surface water bodies by surface runoff, or they might percolate through the soil to lower soil layers and groundwater. They can also alter soil microbial diversity and biomass, suppress soil respiration, and result in soil infertility [39,97].

Pesticide contamination of surface water and groundwater is reported all around the world. Surface water and groundwater contain a variety of compounds, including certain pesticides. The mobility of pesticides in water leads to pollution of water resources [95]. Pesticide contamination of groundwater and surface water poses a serious and urgent threat to freshwater and coastal ecosystems around the world. It directly impacts drinking water quality in local areas and indirectly affects the soil and the food chain. Pesticide residues in water endanger biological communities, including humans. Pesticides influence not just fish but the whole aquatic ecology [9]. Airborne pesticide contamination is a significant source of pollution with dangerous consequences for flora, fauna, and human health [73].
Impact on nontarget organisms

Pesticides harm nontarget creatures such as earthworms, natural predators, and pollinators, in addition to the target organisms [105]. They cause a decrease in the earthworm population, which reduces soil respiration and leads to soil infertility. Insecticides are particularly harmful to some predators, such as parasitoids, which are important in pest control; the eradication of these natural predators has the potential to worsen pest problems. Wild pollinators such as bees, fruit flies, beetles, and birds are also harmed by pesticides, resulting in indirect losses of agricultural and vegetable output [106].

Discussion

The use of pesticides in vegetable production is increasing day by day in Bangladesh. Due to the increased demand for vegetables, the frequency and quantity of pesticide use have risen over the past years in tomato, eggplant, cucumber, bitter gourd, bean, cauliflower, cabbage, and okra production. There has been a trend toward pesticide use in vegetable production because it gives better output cost-effectively. This leads to higher contamination of vegetables with pesticide residues and causes many health problems for consumers. The problem is not limited to Bangladesh; it is a pressing issue on a global scale. It has been reported from Pakistan, India, and China that pesticides are used indiscriminately in vegetable production and cause many health problems for consumers [103,12,74].

In Bangladesh, different types of pesticides have been used for vegetable production throughout the decades, including OPs, carbamates, organochlorines, neonicotinoids, pyrethroids, and others. The review highlighted that a single vegetable could be contaminated with single or multiple pesticide residues, and that most of the pesticides had residues above the FAO permissible limits. These findings indicate that farmers used different types of pesticides for single-commodity production without knowing the consequences. Moreover, the trend throughout the decade was excessive pesticide use in vegetable production without maintaining a proper withdrawal period before marketing. This leads to pesticide residues remaining in the vegetables at the time of marketing, posing threats to human health such as cancer, kidney failure, and heart attack. Studies from neighbouring countries like China, Pakistan, and India showed similar findings [59,69,7]. These findings suggest that developing countries like China, Pakistan, India, and Bangladesh use pesticides heavily in vegetable production, whereas, in developed countries such as the United States, pesticides are reportedly used judiciously [75].

Farmers had very limited knowledge regarding the safe use and handling of pesticides and withdrawal periods. There was no monitoring system for pesticide purchase, use, or handling in vegetable production. There are also no complementary health and long-term monitoring systems for farmers and farm workers, nor any legislation or policy measures governing chemical application methods in farming in Bangladesh. Thus, pesticide residues in vegetables have become a major public health concern for both consumers and governments.
Nonetheless, the areas covered in this review are the priority areas for effectively monitoring different pesticides in commodities on a routine basis. However, this study focused only on data from the last decade and could not include data from past decades. Mass data accumulation is therefore needed for comparative analysis and to alert the scientific community to pursue further research on this vulnerable issue. Steps should also be taken to educate farmers on the safe use of pesticides to reduce contamination of foodstuffs and the environment, with the help of the government, agrochemical industries, and NGOs. Furthermore, an effective campaign should be organized to educate farmers about the routes by which contaminants enter the body and food, and about the importance of existing food safety laws.

Conclusions and recommendations

Pesticides are indiscriminately used in vegetable production in Bangladesh. Different types of pesticides are applied in Bangladesh, as in other developing countries, and cause many health hazards to consumers. Nevertheless, there are no long-term monitoring programs in place for this vulnerable issue. This review makes clear that the data on regular monitoring, exposure, and poisoning are insufficient. Therefore, a long-term, regular monitoring program for pesticides applied in vegetable production and marketing is urgently needed, together with policy intervention.

It is nearly impossible to produce vegetables without applying pesticides, and as a result, pesticides have now become part of our food chain. However, the application of these toxic substances is hazardous for us. The review therefore suggests the following ways to reduce or replace pesticide use in vegetable production.

Table 9. Effects of pesticide usage on human health [109,110,16,26,42,72,98].

1. The most basic technique for reducing pesticide application is integrated pest management (IPM). It involves determining the pest threshold in the field, identifying the pests, and determining whether or not they are damaging. Targeted spraying should be used instead of broad-spectrum spraying if there is a pressing need [44].
2. Pesticides adhere loosely to the surface or penetrate the outer layers of vegetables; washing and peeling can mitigate the load of pesticide contamination [78].
3. Alternative ways to reduce pest infestation in vegetables, such as neem, should be used. Many studies suggest that neem is very effective against pests [60].
4. All forms of media should be used to discourage indiscriminate pesticide use among farmers, i.e., radio, television, newspapers, agriculture staff, and schools, along with the launch of a contest on best management practices (BMP) and integrated pest management (IPM) by agricultural extension staff in all districts.
5. It is now common knowledge that the most important step is to adopt an organic farming system.
6. The government must implement an effective monitoring system for pesticide use, selling, and marketing. The law should be enforced; otherwise, pesticide use in vegetable production will keep increasing.
7. For effective monitoring, vegetable samples should be collected regularly, and routine quantitative tests should be performed to determine whether the samples contain pesticide residues.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 3. Criteria for selecting and excluding scholarly articles on the use of pesticides in vegetable production.

Table 1. Pesticide contamination status of tomatoes reported from different areas of Bangladesh (2010-2022) (columns: area of collection, total samples, contaminated samples, samples above MRL, detection technique).
Table 2. Pesticide contamination status of eggplant reported from different areas of Bangladesh (2010-2022).
Table 3. Pesticide contamination status of different types of beans reported from different areas of Bangladesh (2010-2022).
Table 4. Pesticide contamination status of different types of gourds reported from different areas of Bangladesh (2010-2022).
Table 5. Pesticide contamination status of cauliflower reported from different areas of Bangladesh (2010-2022).
Table 6. Pesticide contamination status of cabbage reported from different areas of Bangladesh (2010-2022).
Table 7. Pesticide contamination status of cucumber reported from different areas of Bangladesh (2010-2022).
Table 8. Pesticide contamination status of miscellaneous vegetables reported from different areas of Bangladesh (2010-2022).
2023-09-11T15:08:31.502Z
2023-09-09T00:00:00.000
{ "year": 2023, "sha1": "d870b48067526f039d783f71d390e3ad3a32b6fd", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "d5f0425acb2fe390738b4c475a02e9bab38c1fd8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
7417269
pes2o/s2orc
v3-fos-license
The cutaneous vascular system in chronic skin inflammation

The blood and lymphatic vasculature play an important role in skin homeostasis. Angiogenesis and lymphangiogenesis - the growth of new vessels from existing ones - have received tremendous interest because of their role in promoting cancer spread. However, there is increasing evidence that both vessel types also play a major role in acute and chronic inflammatory disorders. Vessels change their phenotype in inflammation (vascular remodeling). In inflamed skin, vascular remodeling consists of a hyperpermeable, enlarged network of vessels with increased blood flow and influx of inflammatory cells. During chronic inflammation, the activated endothelium expresses adhesion molecules, cytokines, and other molecules that lead to leukocyte rolling, attachment, and migration into the skin. Recent studies reveal that inhibition of blood vessel activation exerts potent anti-inflammatory effects. Thus, anti-angiogenic drugs might be used to treat inflammatory conditions. In particular, topical application of anti-angiogenic drugs might be ideally suited to circumvent the adverse effects of systemic therapy with angiogenesis inhibitors. Our recent results indicate that stimulation of lymphatic vessel growth and function unexpectedly represents a novel approach for treating chronic inflammatory disorders.

INTRODUCTION

Inflammation is one of the body's major defense mechanisms against pathological insults such as infection and physical or chemical injury. Acute inflammation is terminated by well understood mechanisms restoring homeostasis. In contrast, chronic inflammatory diseases are self-perpetuating conditions which often result in a generalized systemic inflammation affecting several different organs. Blood and lymphatic vessels play pivotal roles under physiological conditions: the cardiovascular network is the first organ system to develop. Its major functions include the supply of oxygen and nutrients and the disposal of metabolic waste products. In the adult, physiological angiogenesis is indispensable for the normal wound healing process, the menstrual and hair cycles, the response to ischemia, and endometrial growth (Carmeliet, 2003). The lymphatic vasculature is involved in intestinal fat absorption and immune surveillance, and it drains excess tissue fluid back to the blood circulation. The formation of new capillaries from preexisting vessels - angiogenesis and lymphangiogenesis - has received tremendous interest, mainly because of its presumed role in enhancing tumor progression and metastasis (Carmeliet, 2003; Hirakawa et al., 2005b; Karpanen and Alitalo, 2008; Mumprecht and Detmar, 2009). However, vascular remodeling is also a hallmark of many inflammatory diseases including chronic airway inflammation, rheumatoid arthritis, inflammatory bowel disease, atherosclerosis, and the chronic inflammatory skin disease psoriasis (Bainbridge et al., 2006; Baluk et al., 2005; Danese et al., 2006; Detmar et al., 1994). In these conditions, levels of the angiogenic growth factor vascular endothelial growth factor (VEGF)-A are elevated in the inflamed tissue (Detmar et al., 1994; Kanazawa et al., 2001; Koch et al., 1994). Interestingly, the main vascular changes during inflammation consist of vascular enlargement, whereas tumor growth is mainly associated with sprouting angiogenesis. However, vascular hyperpermeability and endothelial cell proliferation are common to both types of angiogenesis.
The effect of blocking VEGF-A and angiogenesis has been extensively investigated in human cancers but warrants further investigation in inflammatory processes.

THE CUTANEOUS VASCULATURE UNDER PHYSIOLOGICAL CONDITIONS

The cutaneous blood vascular architecture consists of a lower and an upper horizontal plexus. The capillary loops extend from the latter (Braverman, 1989). The lymphatic vessels of the skin also form two plexuses in the vicinity of the blood vascular plexuses. Branches from the superficial lymphatic vessel plexus protrude into the dermal papillae and drain into larger lymphatic vessels in the lower dermis and the superficial zone of the subcutaneous tissue. For more details regarding the cutaneous vessel anatomy, see Skobe and Detmar (2000). The structure of blood vascular endothelial cells varies with their anatomical location (Aird, 2007). The resting cutaneous blood vessels contain a continuous monolayer of endothelial cells with a continuous basement membrane (Figure 1). The blood vascular endothelial cells are covered with pericytes and form tight and adherens junctions. Under non-activated conditions, quiescent endothelial cells do not interact with leukocytes, inhibit coagulation, and allow no major extravasation of blood proteins into the surrounding tissue (Pober and Sessa, 2007). In contrast to blood vascular endothelial cells, the endothelial cells of lymphatic capillaries overlap, lack tight junctions and mural cells, have only a rudimentary or no basement membrane, and are linked to the extracellular matrix by fibrillin-containing anchoring filaments (Figure 1). Therefore, tissue fluid - containing cells and macromolecules - can directly enter the lymphatic capillaries. The lumen of lymphatic vessels is significantly wider and the wall is thinner than that of blood vessels. The fluid entering the initial lymphatic capillaries is drained to pre-collecting and collecting lymphatic vessels, which contain a basement membrane, smooth muscle cells, and backflow-preventing valves (similar to veins). Finally, the fluid is returned to the blood circulation in the jugular region.

BLOOD AND LYMPHATIC VESSELS IN INFLAMMATION

Blood vessels and, to a lesser extent, lymphatic vessels contribute essentially to the cardinal signs of inflammation: dilated blood vessels with increased flow underlie the "rubor" and "calor"; the excess exudate caused by hyperpermeable blood vessels, exceeding the fluid drainage capacity of lymphatic vessels, results in "tumor". Finally, "dolor" and "functio laesa" are subsequent processes following vascular activation and influx of leukocytes. Activation of the endothelium by inflammatory mediators (such as VEGF-A, TNF-α, IL-6, IL-1β, and others) leads to the up-regulation of adhesion molecules such as E-selectin, intercellular adhesion molecule-1 (ICAM-1), and vascular cell-adhesion molecule-1 (VCAM-1), which enables the interaction with leukocytes (Jackson et al., 1997). In chronic inflammatory diseases, the vasculature remains activated, enlarged, and hyperpermeable, and it sustains the accumulation of fluid (edema) and cells. Considerable amounts of plasma proteins extravasate from the blood into the tissue during inflammation (Feng et al., 1999). Increased interstitial fluid pressure in inflamed skin leads to the opening of the overlapping lymphatic endothelial cells and to the entry of cell- and macromolecule-rich fluid. The mechanisms controlling the widening of the lymphatic lumen are currently unknown, as is the function of dilated lymphatic vessels.
Lymphatic vessels remained dilated in a mouse model of chronic airway inflammation even when the inflammation was resolved (Baluk et al., 2005). Therefore, it remains unclear whether the increase in interstitial pressure is the sole driving force of lymphatic vessel dilation. Lymphatic vessels are also a direct source of cytokines and chemokines (Gunn et al., 1998).

INFLAMMATORY SKIN DISEASES WITH VASCULAR INVOLVEMENT

A multitude of diseases are linked to an insufficient or overactive vasculature (Carmeliet, 2003). Among them are many inflammatory diseases (Table 1). The inflammatory skin diseases associated with prominent remodeling of the vasculature range from UV damage, bullous pemphigoid, and contact dermatitis to rosacea and psoriasis (Brown et al., 1995; Gomaa et al., 2007; Kunstfeld et al., 2004; Yano et al., 2002). Vascular remodeling is controlled by pro- and anti-angiogenic mediators; an imbalance leads to vessel growth or regression.

Psoriasis

Psoriasis is probably the chronic inflammatory skin condition for which changes in the vasculature are best described. The finding that microvascular abnormalities are a characteristic feature and occur at the onset of psoriasis has been recognized for more than 50 years (Braverman, 1972; Szodoray, 1955; Telner and Fekete, 1961). Even before the epidermal hyperplasia develops, the skin capillaries become tortuous and expanded. The redness of the skin lesions is caused by the close vicinity of the tortuous vessels in regions of thinned epithelium. The lymphatic vasculature is also dilated in the superficial dermis, as recognized by electron microscopy and, more recently, by the detection of specific markers for lymphatic vessels (Braverman, 1972; Kunstfeld et al., 2004).

Angiogenesis in psoriasis - It is of interest that the main drivers of angiogenesis in psoriasis are derived from the epidermis (Malhotra et al., 1989). Macrophages and fibroblasts are additional sources of angiogenic factors, including VEGF-A. VEGF-A is probably the most important growth factor leading to blood and lymphatic vascular remodeling in psoriasis, and is currently the best described inducer of inflammation-driven vascular remodeling (Detmar et al., 1994; Ferrara et al., 2003). Additional angiogenic mediators, including hypoxia-inducible factor, TNF-α, IL-1, IL-6, IL-8, IL-17, IL-18, angiopoietins, and many others, are involved (Bernardini et al., 2003; Heidenreich et al., 2009). Table 2 summarizes the differential pro- and anti-angiogenic effects of important cytokines and chemokines involved in the pathogenesis of psoriasis. VEGF-A binds to VEGFR-1 and VEGFR-2. VEGFR-1 is expressed on blood vessels, whereas VEGFR-2 is expressed on both blood and lymphatic vessels (Figure 1). VEGFR-1 can be expressed by monocytes/macrophages, whereas VEGFR-2 is expressed by at least a subset of T cells (Edelbauer et al., 2010; Sawano et al., 2001). Hence, VEGF-A can directly lead to blood and lymphatic vessel activation, and directly affects the attraction of inflammatory cells. The receptor tyrosine kinase VEGFR-2 is thought to be the main mediator of VEGF-A-driven endothelial cell proliferation, differentiation, and sprouting (Adams and Alitalo, 2007). In contrast, the role of VEGFR-1 in the adult organism is less clear. In embryogenesis, VEGFR-1 - which has a higher affinity for VEGF-A than VEGFR-2 but lower kinase activity - likely sequesters VEGF-A to prevent excess signaling and increased angiogenesis through VEGFR-2 (Fong et al., 1995; Hiratsuka et al., 1998).
Thus, the remodeling of the vasculature in lesional psoriatic skin might depend on factors derived from the epidermis, whereas blood vascular remodeling is essential for the nutrient supply of the overlying, hyperproliferative epidermis. Indeed, epidermal vegf-a −/− mice do not show epidermal hyperplasia after repeated tape stripping (Elias et al., 2008). Targeting both the epidermis and the dermal vasculature might therefore represent a valuable treatment strategy for psoriasis. VEGF-A serum levels correlate positively with disease severity in psoriasis patients, and negatively with the treatment success of standard therapies, implicating a role of VEGF-A in disease maintenance and progression (Bhushan et al., 1999; Mastroianni et al., 2005; Nielsen et al., 2002). Therefore, VEGF-A could serve as a biomarker for psoriasis activity. Besides the morphological changes in the cutaneous vasculature, it has been increasingly recognized that these vessels are activated and have an increased expression of adhesion molecules such as VCAM-1, ICAM-1, and E-selectin (Springer, 1994), sustaining the accumulation of infiltrating inflammatory cells.

Angiogenesis in mouse models of inflammation - Many insights into the pro-inflammatory role of VEGF-A stem from animal models: homozygous keratin 14 (K14)/VEGF-A transgenic (Tg) mice - which overexpress mouse VEGF-A164 in the epidermis - spontaneously develop a chronic inflammatory skin disease with many features of human psoriasis at an age of approximately 6 months (Xia et al., 2003). Besides the vascular changes, the homozygous K14-VEGF-A Tg mice also show epidermal hyperplasia, altered keratinocyte differentiation, the typical infiltration of CD11b+ and CD4+ cells, the intraepidermal localization of CD8+ T cells, the presence of corneal microabscesses, and the typical Koebner phenomenon (Xia et al., 2003). Importantly, the K14-VEGF-A Tg mice are sensitive to standard anti-psoriatic therapies such as treatment with betamethasone and cyclosporine A, and they develop a Th17-like disease phenotype similar to human psoriasis (Canavese et al., 2010; Hvid et al., 2008). Interestingly, many current treatment modalities for psoriasis have anti-angiogenic effects, such as targeted phototherapy with a laser, vitamin D3 analogues, TNF-α antagonists, methotrexate, cyclosporine A, and corticosteroids (Avramidis et al.; Canete et al., 2004; Cornell and Stoughton, 1985; Hernandez et al., 2001; Hirata et al., 1989; Oikawa et al., 1990), whereas their effect on the lymphatic vasculature remains elusive. Indeed, the vasoconstrictive potency of corticosteroids was even shown to correlate with clinical activity in psoriasis (Cornell and Stoughton, 1985). In hemizygous K14-VEGF-A Tg mice, chronic inflammatory skin lesions can be induced by delayed-type hypersensitivity reactions (Kunstfeld et al., 2004), and we have previously used this model to discover that topical application of a small-molecule inhibitor of VEGF receptor (VEGFR) kinases results in potent anti-inflammatory effects that were subsequently also found in other models of inflammation (Halin et al., 2008). Specific inhibition of VEGF-A also ameliorated psoriasis-like symptoms in a mouse model of psoriasis in which the epidermis-specific deletion of c-Jun and JunB leads to the disease (Schonthaler et al., 2009).
Besides VEGF-A, another member of the same family of growth factors, namely placental growth factor (PlGF), also plays a major role in cutaneous angiogenesis, inflammation, and edema formation (Oura et al., 2003). K14-PlGF Tg mice are characterized by an increased inflammatory response, with more pronounced vascular enlargement, edema, and inflammatory cell infiltration as compared with wild-type mice. In contrast, mice deficient in PlGF show less inflammation, diminished inflammatory angiogenesis, and less edema (Oura et al., 2003). Last but not least, the importance of angiogenesis for inflammation is underscored by the finding that deficiency of the endogenous angiogenesis inhibitor thrombospondin-2 resulted in prolonged and enhanced cutaneous delayed-type hypersensitivity reactions (Lange-Asschenfeldt et al., 2002). Together, these results indicate an important role of angiogenesis and blood vascular activation in sustaining chronic inflammation. In contrast, the role of the lymphatic vasculature in chronic inflammation has remained unclear.

Lymphangiogenesis in psoriasis - Lymphatic vessels are the conduit for leukocytes from the site of inflammation to secondary lymphoid organs. The current literature suggests that chemokines expressed by lymphatic vessels (in particular CCL21) guide leukocytes to lymphatic vessels, and that their migration in the interstitium depends on the forward flow of polymerizing actin but is integrin-independent (Alvarez et al., 2008; Lammermann et al., 2008; Ohl et al., 2004; Pflicke and Sixt, 2009). It has been reported that the lymphatic vasculature plays an active role in corneal and kidney transplant rejection, in part by facilitating dendritic cell transport to draining lymph nodes (Cursiefen et al., 2004; Kerjaschki et al., 2004). On the other hand, specific blockade of VEGFR-3, a receptor for the lymphangiogenic growth factors VEGF-C and VEGF-D that is mainly expressed on the lymphatic endothelium in the adult (Kaipainen et al., 1995), enhanced the mucosal edema in a mouse model of chronic airway inflammation (Baluk et al., 2005), increased the severity of inflammation in a mouse model of chronic inflammatory arthritis (Guo et al., 2009), and also prolonged the course of inflammatory ear swelling in a mouse model of chronic skin inflammation (Huggenberger et al., 2010). Additionally, the inhibition of VEGF-C/-D by sVEGFR-3 significantly decreased lymph flow in a model of bacterial skin inflammation (Kataru et al., 2009), whereas the genetic overexpression of soluble VEGFR-3 in the skin of mice resulted in a lymphedema-like phenotype (Makinen et al., 2001a). Interestingly, the deficiency of the chemokine receptor D6 in mice - which is expressed on lymphatic vessels and likely degrades pro-inflammatory chemokines - leads to a chronic inflammatory skin disease resembling human psoriasis after treatment with phorbol esters (Jamieson et al., 2005; Nibbs et al., 2001). Lymphatic vessels also have an increased density in arthritic joints of mice and men, and this density is further increased after standard infliximab therapy (Polzer et al., 2008; Zhang et al., 2007). In inflamed tissues, the lymphangiogenic growth factors VEGF-C and VEGF-A are secreted by immune cells such as macrophages, and by resident tissue cells such as keratinocytes and fibroblasts.
After proteolytic processing of the propeptides, mature VEGF-C also binds and activates VEGFR-2 which, besides its expression on the blood vascular endothelium, is also expressed on lymphatic vessels (Joukov et al., 1997; Kriehuber et al., 2001; Makinen et al., 2001b; Wirzenius et al., 2007). Inflammation-induced lymphangiogenesis can be directly regulated by VEGF-A/VEGFR-2 and VEGF-C/VEGF-D/VEGFR-3 signaling, and might be modulated by the attraction of inflammatory cells secreting lymphangiogenic factors (Baluk et al., 2005; Kataru et al., 2009; Wuest and Carr, 2010). However, VEGF-A-induced lymphatic vessels might be less functional than those induced by VEGF-C or VEGF-D/VEGFR-3 signaling (Nagy et al., 2002). Recently, it was reported that the inflamed lymphatic endothelium expresses ICAM-1, and that it might directly interact with CD11b-expressing dendritic cells, resulting in a reduced capacity of dendritic cells to stimulate T-cell proliferation (Podgrabinska et al., 2009). These results highlight the active participation of lymphatic endothelial cells in regulating inflammatory processes. We have recently found, using an in vivo near-infrared imaging approach in mice, that the establishment of chronic inflammatory skin lesions is associated with impaired lymphatic function and concomitantly decreased lymph flow (Huggenberger et al., 2010). More importantly, we found, for the first time, that specific activation of lymphatic vessels by Tg overexpression of VEGF-C or of the VEGFR-3-specific ligand mVEGF-D, as well as the intradermal injection of the VEGFR-3-specific mutant VEGF-C156S protein, inhibited chronic skin inflammation in the K14-VEGF-A Tg mouse model (Huggenberger et al., 2010). The reduction in skin inflammation was accompanied by a decreased inflammatory cell infiltrate and normalized epidermal differentiation. It will be of great interest to see whether the application of VEGF-C and activation of lymphatic vessels also exert anti-inflammatory effects in other chronic inflammatory diseases, such as arthritis and inflammatory bowel disease. Inflammation-induced lymphangiogenesis might therefore represent an endogenous counterregulatory mechanism aimed at limiting edema formation and inflammation.

Rosacea

Besides psoriasis, rosacea is also characterized by pronounced vascular alterations. The potential mechanisms contributing to the pathogenesis of rosacea include innate immunity, reactive oxygen species, UV radiation, microbes, and vascular alterations (Yamasaki and Gallo, 2009). Blood flow is increased and dermal dilation of blood vessels is visible in lesional rosacea skin (Marks and Harcourt-Webster, 1969; Sibenge and Gawkrodger, 1992). VEGF-A levels, angiogenesis, and lymphangiogenesis have been reported to be increased in lesional skin of rosacea patients (Gomaa et al., 2007). This is in line with the clinical flushing episodes and the erythema observed in patients. Interestingly, UV irradiation exacerbates rosacea, likely by stimulating keratinocytes to produce VEGF-A (Brauchle et al., 1996). In contrast, the role of lymphatic vessels in rosacea is currently unknown. A number of patients show skin edema reminiscent of lymphedema, and at the phymatous stage, there is a pronounced lymphedema of the skin. Together, these findings implicate an important role of impaired lymphatic function in rosacea pathogenesis.
Cutaneous UVB damage

A single dose of ultraviolet B (UVB; 290-320 nm) irradiation induces epidermal thickening, dilation and hyperpermeability of blood vessels, edema, and erythema (Berton et al., 1997; Pearse et al., 1987). UVB irradiation up-regulates several pro-angiogenic molecules, such as basic fibroblast growth factor, interleukin-8, and VEGF-A, whereas anti-angiogenic proteins such as thrombospondin-1 are down-regulated (Bielenberg et al., 1998; Kramer et al., 1993; Strickland et al., 1997; Yano et al., 2004). The repeated exposure of human skin to UVB radiation results in the degradation of extracellular matrix, increased elastosis, a reduction of dermal blood and lymphatic capillaries, wrinkle formation, and ultimately an increased risk of epithelial skin cancers (Chung et al., 2002; Kajiya et al., 2007; Kligman, 1979, 1989; Kripke, 1994). The reduction of blood and lymphatic vessels is most likely the consequence of extracellular matrix degradation that no longer supports vessel maintenance (Chung and Eun, 2007; Kajiya et al., 2007). Mice that overexpress VEGF-A are more sensitive to UVB irradiation than wild-type mice (Hirakawa et al., 2005a). Conversely, we previously found that systemic blockade of VEGF-A reduces UVB-induced inflammation and vascular enlargement without inhibiting tissue repair (Hirakawa et al., 2005a). In line with these findings, overexpression of the angiogenesis inhibitor thrombospondin-1 in epidermal keratinocytes of Tg mice potently prevented UVB-induced photodamage. These data underscore a damage-mediating role of angiogenesis and blood vascular hyperpermeability in UVB-induced skin damage. Importantly, we recently found that chronic UVB exposure of mouse skin results in dilated lymphatic vessels that are leaky. Furthermore, inhibition of the lymphatic endothelium-specific VEGFR-3 by a monoclonal antibody significantly prolonged UVB-induced inflammatory edema formation and cell infiltration, whereas activation of VEGFR-3 by the specific activator VEGF-C156S or mouse VEGF-D reduced edema and inflammation (Huggenberger et al., 2011; Kajiya et al., 2009). While VEGF-A is up-regulated after UVB irradiation, VEGF-C is down-regulated (Kajiya et al., 2009). This finding might explain the increased permeability of blood vessels and the reduced lymphatic drainage function after UVB exposure. Together, these data indicate that inhibition of blood vessel activation / angiogenesis or stimulation of lymphatic function might represent novel approaches to prevent cutaneous photodamage.

CONCLUSIONS AND OUTLOOK

There are numerous drugs for the treatment of inflammatory disorders, but none of these drugs was intentionally developed to directly modulate the vascular endothelium, although many clinically used therapeutics also target the vasculature. There is now extensive evidence that targeting the activated, remodeled blood vessels might represent a novel and promising therapeutic approach for treating chronic inflammatory diseases - not only of the skin. The status of vascular activation might also be used as a biomarker for the intensity and activity of inflammatory diseases. Importantly, our recent findings indicate that activation of lymphatic vessels might serve as a novel strategy for treating chronic inflammatory disorders such as psoriasis, rosacea, chronic airway inflammation, rheumatoid arthritis, inflammatory bowel disease, atherosclerosis, and others.

Figure 1.
Schematic overview of the proposed role of blood and lymphatic vessels in chronic skin inflammation. Cutaneous blood vessels contain a monolayer of endothelial cells (red) with a continuous basement membrane (gray). Pericytes (blue) cover the blood vascular endothelial cells (BEC). In contrast, lymphatic endothelial cells (LEC, green) lack mural cells and have only a rudimentary basement membrane. They are linked to the extracellular matrix via fibrillin-containing anchoring filaments (green). The lumen of lymphatic vessels is significantly wider and the wall is thinner than that of blood vessels. BEC express VEGFR-1 and VEGFR-2, whereas LEC express VEGFR-2 and VEGFR-3. VEGF-A binds both VEGFR-1 and VEGFR-2.
16S rRNA and Omp31 Gene Based Molecular Characterization of Field Strains of B. melitensis from Aborted Foetus of Goats in India

Brucellosis is a reemerging infectious zoonotic disease of worldwide importance. In humans, it is mainly caused by Brucella melitensis, a natural pathogen of goats. In India, a large number of goats are reared in semi-intensive to intensive systems in close vicinity to human beings. At present, there is no vaccination and control strategy for caprine brucellosis in the country, and formulating an effective control strategy requires knowledge of the etiological agent. The present study was therefore conducted to isolate and identify the Brucella species prevalent in caprine brucellosis in India. Thirty samples (fetal membrane, fetal stomach content, and vaginal swabs) collected throughout India from aborted goat fetuses yielded 5 isolates, all belonging to Brucella melitensis biovar 3. All the isolates produced amplification products of 1412 and 720 bp in polymerase chain reactions with genus-specific 16S rRNA and species-specific omp31 primers, respectively. Moreover, the amplification of the omp31 gene confirmed the presence of the immunodominant outer membrane protein (31 kDa omp) in all the field isolates of B. melitensis from aborted goat fetuses in India. These findings can support the development of an omp31-based specific serodiagnostic test as well as a vaccine for the control of caprine brucellosis in India.

Introduction

Brucellosis is an infectious zoonotic disease of worldwide importance in both animals and humans [1,2] caused by microorganisms belonging to the genus Brucella, Gram-negative facultative intracellular bacteria [3-5]. It is a bacterial zoonosis of major public health and economic significance [4,6,7]. There are several species of Brucella, each with slightly different host specificity. Six classical species have been identified: B. melitensis, B. suis, B. abortus, B. ovis, B. neotomae, and B. canis [8]. This classification is based on animal host specificity, susceptibility to dyes, metabolic patterns, phage typing, and serological testing [9-12]. B. melitensis uses sheep and goats as its preferred natural hosts, but other animals and human beings may also be infected [13,14]. Other species, such as B. abortus, B. suis, B. ovis, and B. neotomae, mainly infect cattle, pigs, sheep, and rodents. Recently, new species were discovered in marine mammals (B. pinnipedialis and B. ceti), in the common vole Microtus arvalis (B. microti), and even in a breast implant (B. inopinata) [2]. Caprine brucellosis causes serious economic losses through abortions and stillbirths, besides being potentially hazardous to animal handlers. Infected parturitions (normal birth or abortion) and infected males play important roles in the spread of infection in herds [2,3,13,14]. Control of infection is necessary not only to reduce economic losses but also to avoid transmission to humans [15]. In India, 13.4% of kids are expected to be lost due to Brucella-originated abortions and stillbirths in semi-intensively managed goat herds [16]. Because of the serious economic importance and medical consequences of brucellosis, especially in developing countries [1,17], efforts have been made to prevent and control the disease through the use of vaccines [2,18].
The continued improvement of vaccines against B. melitensis is important for the control and eradication of the disease in sheep, goats, and human beings [18-20]. For that, isolation and characterization of the existing species is not only essential but also key to success in the form of a diagnostic test or vaccine [3,7,20-22]. Thus, to establish the etiological agent of caprine brucellosis and to determine the biotypes of Brucella spp. present in caprine abortion cases in India, isolation and identification of the causative agent is the preliminary and essential step. Unequivocal diagnosis rests on bacteriological identification of the causative agent [23], and isolation remains the gold standard test, whether for screening for infection or for preparing eradication programs [24]. Moreover, for further confirmation of Brucella species, various molecular methods have been developed [25-27], most of them based on detection of the omp31 gene in B. melitensis [28]. These outer membrane proteins (Omps) have been isolated and characterized from several species of Brucella, initially for the development of subcellular vaccines [25-29]. Brucella abortus strains contain two major Omps designated omp25 (25-27 kDa) and omp2, or porin (36-38 kDa) [25,26]. Similarly, B. melitensis contains Omps with apparent molecular masses of 25-27 kDa and 31-34 kDa, the latter now designated omp31 [28], as well as a 28 kDa protein designated omp28 [27]. The omp31 gene of B. melitensis 16M has been cloned and expressed on the surface of E. coli [28] and was shown to protect the mouse model and the natural host against a B. ovis challenge [30,31]. Thus, there is increasing interest worldwide in the cloning and molecular characterization of the omp31 gene from different strains of B. melitensis, with the ultimate goal of a suitable, safe, and effective vaccine and the development of a B. melitensis specific diagnostic test. Hence, the present study was planned to establish the involvement of Brucella species and biovars, with molecular characterization of the omp31 gene encoding an immunodominant outer membrane protein (31 kDa omp), in field strains of B. melitensis from aborted goat fetuses in India.

Materials and Methods

Samples. Thirty samples (fetal membrane, fetal stomach content, and vaginal swabs) collected aseptically from aborted goats and their fetuses just after abortion were subjected to isolation of bacteria and molecular characterization through PCR.

Isolation and Identification of Brucella. For the isolation of Brucella, material from the different sources was inoculated on sterile plates of Brucella selective agar with hemin and vitamin K1 (HiMedia) and incubated at 37°C for 48 h. The plates were observed every 24 h for the development of growth. After growth, colonies suspected to be Brucella on the basis of cultural characteristics [23] were picked and streaked onto fresh Brucella selective agar plates with hemin and vitamin K1 and incubated at 37°C for 2 days to obtain pure cultures.

Cultural Characterization of Isolates. The pure cultures of the isolates, examined by morphological examination, were inoculated on Brucella selective agar medium, MacConkey Lactose agar (MLA), and sheep blood agar [10]. Isolates showing characteristic colonies on Brucella selective agar, no growth on MLA, and nonhemolytic colonies on blood agar were maintained on serum dextrose agar for further studies.

Morphological Characterization of Isolates.
The isolates suspected to be Brucella were subjected to Gram staining and Stamp's modified Ziehl-Neelsen (MZN) staining [23] to check the purity of the cultures and their morphological characters. Stamp's modified Ziehl-Neelsen staining was performed with 0.4% basic fuchsin solution, followed by rapid decolorization with 0.5% acetic acid solution and counterstaining with 1% methylene blue or malachite green solution. The smears were examined microscopically with an oil-immersion objective lens (×100).

Biochemical Confirmation of Isolates. Pure suspected Brucella isolates, maintained on serum dextrose agar, were analysed for their biochemical profiles for the differentiation of Brucella species on the basis of the following tests: catalase, oxidase, urea hydrolysis, nitrate reduction, indole production, citrate utilization, methyl red, and Voges-Proskauer, as per standard methods [23,32].

Biotyping of Brucella Isolates. Cultures showing typical Brucella characteristics were subjected to biotyping techniques such as H2S production; growth in the presence of thionin and basic fuchsin dyes (10-40 μg/mL) incorporated into tryptic soya agar at different concentrations (1:25,000, 1:50,000, and 1:100,000) from a 0.1% stock solution (in distilled water); and CO2 requirement immediately after primary isolation, as per a well-described method [33]. Lead acetate strips were used to detect the production of H2S during growth, and a growth test on media containing streptomycin (2.5 μg/mL) was performed to discriminate the isolates from the vaccine strain Rev1, as per standard procedures [11,12].

Molecular Characterization of Brucella melitensis Isolates. For molecular confirmation of these isolates, amplification of the 16S rRNA and omp31 genes was performed using a Taq PCR master mix kit (Qiagen).

Extraction of DNA from Colonies. Colonies of the isolates from serum dextrose agar were transferred onto Brucella selective agar plates with hemin and vitamin K1. A few colonies were then picked and transferred into a 2 mL Eppendorf tube containing 1 mL of sterile PBS (pH 7.4). The suspension in PBS was centrifuged at 10,000 rpm for 10 min at 10°C. The supernatant was discarded, and the pellet was used for extraction of DNA. Deoxyribonucleic acid (DNA) was isolated using an mdi kit (Advanced Microdevices Pvt. Ltd., India).

Polymerase Chain Reaction. DNA isolated from the bacterial colonies was used in polymerase chain reactions for the amplification of the 16S rRNA and omp31 genes for the confirmatory identification of Brucella melitensis, using a Taq PCR master mix kit (Qiagen). The 16S rRNA gene is specific to the genus Brucella, while omp31 is a species-specific gene of Brucella melitensis [34,35]. For the amplification of the 16S rRNA gene, the previously described forward primer (5′-AGAGTTTGATCCTGGCTCAG-3′) and reverse primer (5′-ACGGCTACCTTGTTACGACTT-3′) were used [36]. Similarly, for the amplification of the species-specific omp31 gene, a set of forward (5′-TGACAGACTTTTTCGCCGAA-3′) and reverse (5′-TATGGATTGCAGCACCG-3′) primers was applied [28]. The 25 μL PCR reaction was prepared with 12.5 μL Taq PCR master mix (2×), 1 μL forward primer (10 pmol/μL), 1 μL reverse primer (10 pmol/μL), 2 μL template DNA, and 8.5 μL nuclease-free water. The final reaction volume of 25 μL for each sample was run in a thermal cycler (Techne TC 4000).
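For bench use, the 25 μL recipe above scales mechanically with the number of reactions. The small Python helper below does that arithmetic as a minimal sketch: the component volumes come from the text, while the function itself, the 10% overage, and the sample count are illustrative assumptions (template DNA would normally be added per tube rather than to a shared master mix).

```python
# Per-reaction volumes (uL), taken from the protocol described above
PER_REACTION_UL = {
    "Taq PCR master mix (2x)": 12.5,
    "forward primer (10 pmol/uL)": 1.0,
    "reverse primer (10 pmol/uL)": 1.0,
    "template DNA": 2.0,   # usually added per tube, listed here for completeness
    "nuclease-free water": 8.5,
}

def batch_volumes(n_reactions, overage=0.10):
    """Scale the recipe to n_reactions, with an assumed pipetting-loss overage."""
    scale = n_reactions * (1 + overage)
    return {reagent: round(vol * scale, 1) for reagent, vol in PER_REACTION_UL.items()}

# e.g., the 5 isolates plus hypothetical positive and negative controls
for reagent, vol in batch_volumes(7).items():
    print(f"{reagent}: {vol} uL")
```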
The amplification of the 16S rRNA gene was conducted with initial denaturation at 95°C for 5 min; cycles of denaturation at 95°C for 30 sec, annealing at 54°C for 1.5 min, and extension at 72°C for 1.5 min; and a final extension at 72°C for 10 min. The omp31 gene amplification was performed with initial denaturation at 95°C for 5 min; cycles of denaturation at 95°C for 1 min, annealing at 58°C for 1 min, and extension at 72°C for 1 min; and a final extension at 72°C for 10 min.

Quantitation and Quality Assessment of PCR Products by Agarose Gel Electrophoresis. For the electrophoresis of PCR products, a 1% agarose gel was prepared in TAE buffer (Bangalore Genei). Ethidium bromide (10 mg/mL) was added to a final concentration of 0.5 μg/mL and mixed gently prior to casting of the gel. The PCR product (8 μL) was mixed with 2 μL of loading dye, loaded into the gel apparatus (GeNei, India), and run at 70-80 V for 40-50 min until the dye reached the middle of the gel. The gel was photographed under a UV illuminator (Alpha Innotech). The size of each amplicon was assessed by comigration with standard DNA ladders in the ranges of 1000-2000 bp and 100-1000 bp for the 16S rRNA and omp31 amplifications, respectively (Bangalore Genei).

Results

All the aborted materials collected from the cases of abortion were inoculated on Brucella selective agar plates, and the isolates producing characteristic, very small, glistening, smooth, round, pin-point colonies were further transferred onto MacConkey Lactose agar (MLA) and sheep blood agar. The isolates which did not grow on MLA and were nonhemolytic on blood agar were examined for morphological characters by Gram and modified Ziehl-Neelsen (MZN) staining. Microscopic examination of Gram-stained cultures revealed small Gram-negative coccobacilli, and on modified Ziehl-Neelsen (MZN) staining the organisms stained red against a blue background. These isolates were further assessed for biochemical characters: they were positive for the catalase, oxidase, urea hydrolysis, and nitrate reduction tests and negative for the indole production, citrate utilization, methyl red, and Voges-Proskauer tests, suggestive of Brucella species (Table 1). Thus, on the basis of cultural, morphological, and biochemical characteristics, five isolates were identified as Brucella species. For conventional diagnosis, all the isolates were differentiated phenotypically into species and partially into biovars using parameters such as CO2 requirement, H2S production, and growth on media plates containing thionin and basic fuchsin dyes (10-40 μg/mL) incorporated into tryptic soya agar at three different concentrations (1:25,000, 1:50,000, and 1:100,000). The growth of all 5 isolates on media with thionin only at the 40 μg/mL (1:25,000) concentration and with basic fuchsin at all concentrations identified these isolates as Brucella melitensis biovar 3 (Table 1). For confirmation of genus and species, the DNA of these isolates was subjected to 16S rRNA and omp31 gene amplification; amplified products of about 1412 bp (Figure 1) and 720 bp (Figure 2) were found in all 5 isolates on agarose gel electrophoresis.
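The genus/species logic applied to these gel results is compact enough to express directly. Below is a toy Python sketch of that interpretation; the `classify_bands` helper and its ±50 bp tolerance are illustrative assumptions on our part, not part of the published protocol.

```python
def classify_bands(band_sizes_bp, tol=50):
    """Toy gel interpretation, following the logic of the Results section:
    a ~1412 bp 16S rRNA band marks genus Brucella, and a ~720 bp omp31
    band marks B. melitensis, each matched within +/- tol bp."""
    genus = any(abs(b - 1412) <= tol for b in band_sizes_bp)
    species = any(abs(b - 720) <= tol for b in band_sizes_bp)
    if genus and species:
        return "Brucella melitensis"
    if genus:
        return "Brucella sp."
    return "not identified by this assay"

# all 5 field isolates showed both bands
print(classify_bands([1412, 720]))  # -> Brucella melitensis
```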
Discussion

All 5 isolates obtained from the cases of aborted fetuses were initially confirmed as Brucella species by cultural, morphological, and biochemical tests [10,23,32]. These 5 isolates grew on Brucella selective agar medium with the development of characteristic colonies, similar to earlier reports [10]. The findings are also in concurrence with the isolation of Brucella melitensis from 25 cases in the Thrace Region [37]. All the isolates showed morphological characters similar to previous findings [23], with biochemical test results in agreement with other studies [23,32]. For morphological characterization, Gram staining and modified Ziehl-Neelsen (MZN) staining [23] were applied, and for the differentiation of Brucella species, the catalase, oxidase, urea hydrolysis, nitrate reduction, indole production, citrate utilization, methyl red, and Voges-Proskauer tests (Table 1) were applied as per the method recommended earlier [32]. Consistent with earlier reports [32], all the Brucella isolates were positive for the catalase, oxidase, urea hydrolysis, and nitrate reduction tests and negative for the indole production, citrate utilization, methyl red, and Voges-Proskauer tests (Table 1), revealing them to be Brucella species. Thus, on the basis of cultural, morphological, and biochemical characteristics, the organisms were identified as Brucella species [23,32]. The isolates were further differentiated phenotypically into species and partially into biovars using parameters such as CO2 requirement, H2S production, and growth on media plates containing thionin and basic fuchsin dyes at three different concentrations (Table 1) [11,12,23,33]. Accordingly, isolates grown on tryptic soy agar containing both thionin and basic fuchsin at concentrations of 40 μg/mL (1:25,000), 20 μg/mL (1:50,000), and 10 μg/mL (1:100,000) have been taken as Brucella melitensis; isolates with no growth at any concentration of either dye were considered Brucella melitensis biovar 2; and those grown on media with thionin only at the 40 μg/mL (1:25,000) concentration and with basic fuchsin at all concentrations have been considered Brucella melitensis biovar 3 (Table 1) [11,12,32]. These findings identified all the isolates as Brucella melitensis biovar 3, in agreement with earlier reports [3,9,10]. In contrast, one earlier report [37] observed 25 isolates of biotype 1 and 3 of biotype 2 among 29 B. melitensis isolates, whereas among about 78 B. melitensis isolates collected from various parts of Turkey, 69 and 9 isolates were identified as biotype 3 and biotype 1, respectively [11,12]. Thus, Brucella melitensis biovar 3 is mainly responsible for the clinical form of brucellosis in goats and leads to abortions and other clinical signs. Molecular approaches have proven faster and more sensitive than traditional bacteriological tests [8,38-40]. The 16S rRNA component of the 30S small subunit of prokaryotic ribosomes contains hypervariable regions that provide species-specific signature sequences useful for bacterial identification, so the 16S rRNA gene can be used as the diagnostic target in PCR for confirmatory identification of Brucella melitensis. In this study, we primarily focused on the applicability of the 16S rRNA gene as a rapid confirmatory identification tool for the genus Brucella, as per the procedure adopted earlier [36]. The extracted DNA was PCR amplified using Brucella genus-specific primers [36].
A PCR product of about 1412 bp from the 16S rRNA gene was obtained from all the isolates of B. melitensis (Figure 1), confirming that all the isolates belong to the genus Brucella. The advantage of this method is that results can be obtained within 1 day, as compared to 7 days by traditional microbiological testing. Previous work on other bacteria has indicated that differences in 16S rRNA gene sequences may be useful for subtyping or for the differentiation of virulent subtypes from nonvirulent subtypes [41,42]. Low variability at the 16S rRNA locus has been noted as an impediment to using 16S rRNA gene sequencing to discriminate at the species level [43]. However, recent studies of other biothreat select agents have indicated that even subtle differences in the 16S rRNA gene sequence may be used for differentiating and identifying closely related species, which are often cross-reactive in the biochemical identification systems commonly used in diagnostic laboratories [42,44]. A multiplex system has been developed that is sensitive for Brucella spp. and is able to differentiate between B. melitensis and B. abortus [45], although discrepant results were observed with some B. abortus isolates. So far, none of these assays has been accepted for common use in diagnostic laboratories. Moreover, only a few studies in the literature [13,14,46-48] address direct detection of Brucella melitensis in clinical specimens of goat origin. In the present study, a PCR-based assay for the rapid and specific laboratory diagnosis of Brucella melitensis directly from tissue and blood, using primers for the amplification of a 720 bp region of the sequence encoding the 31 kDa immunogenic B. melitensis protein (omp31) [13,14], was applied for the confirmation of Brucella melitensis from genomic DNA with species-specific primers [28]. All the isolates produced an amplified product of about 720 bp (Figure 2). Thus, all the isolates obtained from the cases of abortion in goats belong to B. melitensis, as PCR amplification of the omp31 gene (720 bp) from the extracted genomic DNA using specific oligonucleotide primers [49] confirmed the presence of this gene in B. melitensis and its absence in B. abortus [3,22,50-54]. These findings are in agreement with others who reported B. melitensis from such cases of sheep and goat abortion [11,12,37]. Moreover, the amplification of the omp31 gene confirms the presence of the immunodominant outer membrane protein (31 kDa omp) in all the field isolates of B. melitensis from aborted goat fetuses in India.

Conclusions

Brucella melitensis is mainly responsible for brucellosis in goats and also for transmission of infection to human beings. Control of Brucella melitensis requires effective diagnosis and vaccination, and these can only be designed after epidemiological studies, including isolation of the etiological agent from clinical cases to establish the prevalent species and biovars. A country like India, with a huge goat population reared in close vicinity to humans, is always on the edge of Brucella zoonosis. In this scenario, the finding of the present study that Brucella melitensis biovar 3, with its well-established immunodominant outer membrane protein (31 kDa omp), is the most prevalent strain in the country can be a milestone for the development of effective diagnostic as well as prophylactic agents to eradicate the disease.
Association between Serum 25-Hydroxyvitamin D Level and Stroke Risk: An Analysis Based on the National Health and Nutrition Examination Survey

Background. To analyze the association between serum 25-hydroxyvitamin D (25(OH)D) level and stroke risk based on the National Health and Nutrition Examination Survey (NHANES). Methods. Baseline information on participants from NHANES between 2007 and 2018 was collected. Univariate analysis was used to identify covariates, and a multivariate logistic regression model was used to analyze the association between serum 25(OH)D level and stroke risk. Results. Of the 8,523 participants, 310 had stroke and 8,213 did not. The multivariate logistic analysis showed that serum 25(OH)D deficiency (odds ratio (OR): 1.993, 95% confidence interval (CI): 1.141-3.481, P = 0.012) was a significant risk factor for stroke. Subgroup analysis showed that non-Hispanic whites with serum 25(OH)D deficiency (OR: 2.501, 95% CI: 1.094-5.720, P = 0.001) or insufficiency (OR: 1.853, 95% CI: 1.170-2.934, P = 0.006) had a higher risk of stroke than those with normal 25(OH)D levels. Conclusions. Serum 25(OH)D deficiency may be associated with an increased risk of stroke.

Introduction

Stroke remains the third most common cause of disability and the second most common cause of death worldwide [1]. In the United States, there are an estimated 795,000 new or recurrent strokes, with approximately 130,000 deaths due to stroke, each year [2]. The prevalence increases with age in both females and males. By 2030, an additional 3,400,000 adults in the United States are estimated to have a stroke, an increase of 20.5% over 2012 [3]. Stroke is associated with modifiable risk factors (hypertension, hyperglycemia, obesity, hyperlipidemia, and renal dysfunction) and behavioral risk factors (sedentary lifestyle, cigarette smoking, and unhealthy diet) [2,4]. Interestingly, vitamin D (25-hydroxyvitamin D, 25(OH)D), a hormone mainly regulating calcium homeostasis, has been found to be associated with the development of various nonskeletal chronic diseases, including stroke [5], cardiovascular disease [6], cancer [7], metabolic disorders [8], autoimmune diseases [9], and infectious diseases [10]. In recent years, several studies have been conducted on the association between 25(OH)D level and stroke risk, but the results are inconsistent. Zhou et al. reported that low 25(OH)D levels were associated with ischemic stroke (relative risk: 2.45) [11]. Berghout et al. also found a correlation between 25(OH)D level and prevalent stroke (adjusted odds ratio (OR): 1.31), but only extremely low 25(OH)D levels were related to incident stroke (hazard ratio (HR): 1.25), suggesting that low 25(OH)D levels may not increase the risk of stroke [12]. There is other evidence suggesting no association between 25(OH)D levels and the incidence of stroke (HR: 1.00) [13]. In view of these inconsistent results, in this study we used pooled cross-sectional data from NHANES (2007-2018) to further analyze the association of serum 25(OH)D level with stroke risk.

Materials and Methods

Data Sources. NHANES, a nationally representative survey of noninstitutionalized civilians in the United States, is conducted in two-year cycles, with approximately 10,000 persons in each cycle [14].
Participants aged over 18 years who had serum 25(OH)D level measured during the survey were enrolled in this study. Pregnant women and participants who did not respond to the question on stroke history or had missing information (e.g., age, sex, marital status, family income, educational level, and comorbid conditions) were excluded. The data used in this study were accessed from NHANES, a continuous program performed by the National Center for Health Statistics. Approval from the Institutional Review Board of Tianjin Medical University was not required because the data from NHANES are freely available. Stroke was determined from the Medical Conditions Questionnaire (MCQ). Question MCQ160f, "Has a doctor or other health professional ever told you that you had a stroke?", was asked in interviews; participants who answered "yes" were deemed to have stroke. Diabetes mellitus was identified through the Diabetes Questionnaire (DIQ). Question DIQ010 is "Other than during pregnancy, have you ever been told by a doctor or health professional that you have diabetes or sugar diabetes?"; participants who answered "yes" were considered diabetic. The Blood Pressure & Cholesterol Questionnaire (BPQ) question BPQ080 is "Have you ever been told by a doctor or other health professional that your blood cholesterol level was high?"; participants who answered "yes" were considered to have a high cholesterol level. Hypertension was determined according to question BPQ020, "Have you ever been told by a doctor or other health professional that you had hypertension, also called high blood pressure?"; participants who answered "yes" were considered hypertensive. Dietary intake was estimated by two 24-hour dietary recalls, using the validated Automated Multiple-Pass Method jointly developed by the United States Department of Agriculture (USDA) and the United States Department of Health and Human Services (DHHS) [15]. The specific intake of each nutrient was available in the Dietary Interview-Total Nutrients Intakes files. Consumption of dietary fiber, total fat, fruits, vegetables, vitamin A, vitamin B, vitamin C, and vitamin E was retrieved from the dietary data.

Measurement of Serum 25(OH)D Level. Serum 25(OH)D level is considered the optimal indicator for assessing vitamin D status [16]. Before 2007, measurement was conducted with radioimmunoassay kits against high-performance liquid chromatography-purified 25(OH)D; since 2007, serum 25(OH)D level has been measured using ultrahigh performance liquid chromatography-tandem mass spectrometry. Because of differences in the results of these two measurement methods, this study only analyzed data from 2007 to 2018. The detailed measurement methods and quality assurance for serum 25(OH)D level can be found in the survey laboratory data [17]. The Institute of Medicine (IOM) and the United States Preventive Services Task Force define vitamin D sufficiency as a total 25(OH)D level greater than 50 nmol/L [18,19]. In this study, a serum 25(OH)D level <30 nmol/L was considered deficiency, 30-50 nmol/L insufficiency, 50-125 nmol/L the normal value, and >125 nmol/L adequacy.
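As a small worked example of the thresholds above, the snippet below converts a laboratory value reported in ng/mL to nmol/L (using the standard factor 1 ng/mL = 2.496 nmol/L) and assigns the study's categories; the function names are ours, for illustration, not NHANES variable names.

```python
def ng_ml_to_nmol_l(value_ng_ml):
    """Convert serum 25(OH)D from ng/mL to nmol/L (1 ng/mL = 2.496 nmol/L)."""
    return 2.496 * value_ng_ml

def vitd_status(nmol_l):
    """Categories used in this study (nmol/L cutoffs)."""
    if nmol_l < 30:
        return "deficiency"
    if nmol_l < 50:
        return "insufficiency"
    if nmol_l <= 125:
        return "normal"
    return "adequacy"

# e.g., a reading of 15 ng/mL is ~37.4 nmol/L -> insufficiency
assert vitd_status(ng_ml_to_nmol_l(15)) == "insufficiency"
```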
Statistical Analysis. SAS software (version 9.4, SAS Institute Inc., NC, USA) was employed to analyze the data. Normally distributed data were represented as mean ± standard error (SE) and compared using the t test, while nonnormal data were presented as median and quartiles (M (Q1, Q3)) and compared using the Mann-Whitney U rank-sum test. The χ² test or Fisher's exact test was used to compare categorical data, which were described as n (%). Examination sample weights from NHANES 2007-2018 were taken into account, and all included records from 2007 to 2018 were complete. Covariates with significant differences in the univariate analysis were entered into the multivariate logistic regression model to analyze the association between serum 25(OH)D levels and stroke risk. Differences were considered significant at P < 0.05.

Results

Basic Characteristics of Participants. A total of 10,425 participants with serum 25(OH)D data, stroke information, and age ≥ 18 years were retrieved from NHANES between 2007 and 2018. After excluding 543 participants without dietary intake data and 1,359 participants with other missing information (BMI, marital status, drinking, emphysema, chronic bronchitis, etc.), 8,523 participants were finally eligible for the study. The inclusion and exclusion process is shown in Figure 1. Of these 8,523 participants, the mean age was 46.96 ± 0.35 years; 4,201 (48.60%) were males and 4,322 (51.40%) were females. Several baseline measures were lower in the stroke group than in the no-stroke group. In addition, there were statistical differences in race (χ² = 12.177, P = 0.007) and marital status (χ² = 25.660, P < 0.001) between the two groups. The distribution of serum 25(OH)D levels in the stroke and no-stroke groups is shown in Figure 2; the proportion of participants with serum 25(OH)D deficiency or insufficiency was higher in the stroke group than in the no-stroke group. Results of the subgroup analysis by race are shown in Table 3. Except for non-Hispanic whites, the association between serum 25(OH)D level and stroke risk was not statistically significant in the other racial/ethnic groups, including Mexican Americans.
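For readers who want to reproduce odds ratios of this kind outside SAS, a rough Python equivalent of the weighted logistic regression is sketched below. The file name and column names are hypothetical, the covariate list is truncated, and a fully design-based NHANES analysis would additionally account for strata and primary sampling units rather than weights alone.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical pre-merged analytic file with one row per participant
df = pd.read_csv("nhanes_2007_2018.csv")

df["vitd_cat"] = pd.cut(
    df["vitd_nmol_l"],                      # serum 25(OH)D in nmol/L
    bins=[0, 30, 50, 125, np.inf],
    labels=["deficient", "insufficient", "normal", "adequate"],
)

dummies = pd.get_dummies(df["vitd_cat"], prefix="vitd")
X = pd.concat(
    [dummies.drop(columns=["vitd_normal"]),  # "normal" is the reference level
     df[["age"]]],                           # add remaining covariates here
    axis=1,
).astype(float)
X = sm.add_constant(X)

fit = sm.GLM(
    df["stroke"],                      # 1 = self-reported stroke, 0 = none
    X,
    family=sm.families.Binomial(),
    freq_weights=df["exam_weight"],    # examination weight, as in the text
).fit()

print(np.exp(fit.params))              # odds ratios
print(np.exp(fit.conf_int()))          # 95% CIs
```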
Discussion

A total of 8,523 eligible participants were included in the present study, among whom 310 had stroke and 8,213 did not. The multivariate logistic analysis showed that serum 25(OH)D deficiency was a significant risk factor for stroke. These findings suggest that patients with serum 25(OH)D deficiency (<30 nmol/L) might have an increased risk of stroke. Subgroup analysis showed that non-Hispanic whites with serum 25(OH)D deficiency or insufficiency had a higher risk of stroke. As a fat-soluble vitamin, vitamin D can affect cardiomyocytes, endothelial cells, vascular smooth muscle cells, and inflammatory cells by binding vitamin D receptors (VDR), thereby inhibiting myocardial hypertrophy, protecting the vascular endothelium, and regulating inflammatory responses, consequently decreasing the onset risk of cardiovascular and cerebrovascular diseases and improving patients' prognosis [20-22]. 25(OH)D, the major circulating form of vitamin D in the body, best reflects vitamin D status. Several studies have reported that vitamin D deficiency, whether in serum level or in intake, may be associated with an increased risk of ischemic stroke [11,23]. In the present study, pooled cross-sectional data from NHANES were used to identify the association of serum 25(OH)D level with stroke risk, and the risk of stroke was found to increase significantly in participants with serum 25(OH)D deficiency and insufficiency. However, vitamin D status is influenced by various factors, including age, living area, exposure to sunlight, and daily dietary vitamin D intake [24,25]. After adjustment for multiple covariates, our results still showed that serum 25(OH)D deficiency is related to an increased risk of stroke. The impact of race on serum 25(OH)D-related stroke risk is controversial. The study of Judd et al. indicated that a lower 25(OH)D level is a significant risk factor for incident stroke, with no statistically significant difference between blacks and whites [26]. However, Michos et al. found that serum 25(OH)D deficiency was associated with a higher risk of stroke in whites but not in blacks (hazard ratios 2.13 vs. 0.93) [27]. Our results showed that a lower serum 25(OH)D level was related to an increased risk of stroke in non-Hispanic whites. Robinson-Cohen et al. also demonstrated that a lower serum 25(OH)D level was related to an increased risk of incident coronary heart disease among Hispanic participants [28]. In addition, several studies have indicated that both low and high serum 25(OH)D levels increase the risk of stroke [29,30]. However, a relationship between high 25(OH)D levels and stroke risk was not observed in this study; a possible explanation is that the sample size of participants with high 25(OH)D levels was not large enough to yield statistically significant results. At present, the effect of 25(OH)D level on stroke risk can be explained by several mechanisms. Inappropriate activation of the renin-angiotensin system (RAS) may be a major risk factor for stroke [31]. 25(OH)D, a negative endocrine regulator of the RAS, may influence stroke risk through RAS regulation [31,32]. A previous experiment showed that, by regulating cholesterol efflux and macrophage polarization through elevated CYP27A1 activation, vitamin D played a protective role against atherosclerosis in hypercholesterolemic swine [33]. Activated vitamin D may defer atherosclerosis by inhibiting the formation of foam cells and the process of macrophage cholesterol absorption, consequently reducing the risk of developing stroke [34]. Another study showed that vitamin D deficiency contributes to secondary hyperparathyroidism, and increased levels of parathyroid hormone may accelerate the inflammatory response in atherosclerosis [35,36]. In addition, activated vitamin D plays a crucial role in preventing thrombosis, which may explain why low vitamin D levels are associated with an increased risk of ischemic stroke [37,38]. The strength of the present study is that it was a large-scale, population-based study, providing strong evidence for assessing the relationship between vitamin D status and stroke risk. Compared with single-center studies, our results may be more generalizable. Furthermore, we conducted a further analysis of the relationship between serum 25(OH)D level and stroke risk by race. However, there were some limitations. First, the data in our study were accessed from the NHANES database, which lacks some important variables, such as vitamin D supplementation, exposure to sunlight, living area, and season of vitamin D measurement. Second, the stroke history was ascertained through self-reported data. Although self-reported stroke is not independently verified in NHANES, the questionnaire-based stroke history was checked, and in several studies self-reported NHANES data have been used to identify risk factors for cardiovascular diseases [39-41].
Conclusions

The results suggest that serum 25(OH)D deficiency (<30 nmol/L) might be related to an increased risk of stroke. In addition, non-Hispanic whites with serum 25(OH)D deficiency (<30 nmol/L) or insufficiency (30-50 nmol/L) had a higher risk of stroke than those with normal 25(OH)D levels.
Rational experiment design for sequencing-based RNA structure mapping

Structure mapping is a classic experimental approach for determining nucleic acid structure that has gained renewed interest in recent years following advances in chemistry, genomics, and informatics. The approach encompasses numerous techniques that use different means to introduce nucleotide-level modifications in a structure-dependent manner. Modifications are assayed via cDNA fragment analysis, using electrophoresis or next-generation sequencing (NGS). The recent advent of NGS has dramatically increased the throughput, multiplexing capacity, and scope of RNA structure mapping assays, thereby opening new possibilities for genome-scale, de novo, and in vivo studies. From an informatics standpoint, NGS is more informative than prior technologies by virtue of delivering direct molecular measurements in the form of digital sequence counts. Motivated by these new capabilities, we introduce a novel model-based in silico approach for quantitative design of large-scale multiplexed NGS structure mapping assays, which takes advantage of the direct and digital nature of NGS readouts. We use it to characterize the relationship between controllable experimental parameters and the precision of mapping measurements. Our results highlight the complexity of these dependencies and shed light on relevant tradeoffs and pitfalls, which can be difficult to discern by intuition alone. We demonstrate our approach by quantitatively assessing the robustness of SHAPE-Seq measurements, obtained by multiplexing SHAPE (selective 2′-hydroxyl acylation analyzed by primer extension) chemistry in conjunction with NGS. We then utilize it to elucidate design considerations in advanced genome-wide approaches for probing the transcriptome, which recently obtained in vivo information using dimethyl sulfate (DMS) chemistry.

INTRODUCTION

RNA is a versatile molecule, capable of performing an array of functions in the context of diverse cellular processes (Sharp 2009). To a large extent, its functionality is dependent on its ability to fold into, and transition between, highly specific complex structures. Structure analysis is thus fundamental to basic RNA research as well as to large-scale engineering efforts to design novel RNAs for a rapidly growing number of biomedical and synthetic biology applications (Chen et al. 2010, 2013; Mali et al. 2013). However, determining structure from sequence remains a challenge. As a result of several recent technological advances, a family of experimental approaches, collectively called structure mapping assays, is emerging as a powerful technique in structural studies that is complementary to other approaches (Weeks 2010). Structure mapping assays rely on chemicals or enzymes to introduce modifications into an RNA in a structure-dependent fashion (see Fig. 1), so as to glean information about intra- and intermolecular contacts (Weeks 2010). Until recently, sites of modification have been determined by gel or capillary electrophoresis (CE) (Mitra et al. 2008; Karabiber et al. 2013), but these technologies are now being replaced by next-generation sequencing (NGS), thereby allowing probing of a multitude of RNAs in a single experiment (Underwood et al. 2010; Zheng et al. 2010; Mortimer et al. 2012; Silverman et al. 2013; Wan et al. 2013; Ding et al. 2014; Kielpinski and Vinther 2014; Rouskin et al. 2014; Seetin et al. 2014; Siegfried et al. 2014; Talkish et al. 2014).
NGS delivers a fundamentally new way of measuring molecular dynamics, namely, via their reduction to the identification and counting of sequences. Once coupled to structural measurements, this "digitalization" has opened up new opportunities for genome-wide structure analysis in vivo (Mortimer et al. 2014) and for massively parallel analysis of RNA libraries in vitro (Qi and Arkin 2014). The coupling of structure mapping to sequencing is conceptually simple (see Fig. 1). First, a library of fragments that terminate at the sites of modification is constructed. Their subsequent sequencing reveals their identities, in contrast to estimation of their length by electrophoresis. In practice, however, performing multiplexed mapping requires a careful balance between the extent of modification that is applied to the RNAs in a sample and the depth of sequencing to be performed to detect modifications. Moreover, the degree of multiplexing and the relative abundances of the RNAs affect the nature of this balance, and therefore, experiment design requires making a series of nontrivial decisions that can greatly affect outcomes. In this study, we perform the first systematic quantitative investigation of the effects of controllable experimental parameters on the performance of NGS-based mapping assays via a series of modeling and simulation studies. Our results quantify input-output relationships, elucidate their complexity, and shed light on relevant tradeoffs and pitfalls. Simulations rely on stochastic models of the modification process and fragment generation dynamics. Since NGS readouts are in fact "molecular counters," we are able to directly link an experiment's molecular dynamics to data variation (or quality), a link that is missing in electrophoresis-based quantification. Recent advances in genomics thus present new opportunities for informatics-assisted design methodology. Our analysis leads to a roadmap for rational experiment design, where quantification by simulations guides parameter optimization rather than intuition or heuristics. The roadmap involves the incorporation of prior structure profiling from small-scale studies, and we have developed an in silico framework that exploits this paradigm to allow for experiment design of large-scale multiplexed experiments as well as for evaluation of data analysis schemes. In what follows, we first devise it and demonstrate its utility in the context of SHAPE (selective 2′-hydroxyl acylation analyzed by primer extension) chemistry (Merino et al. 2005) and its recent multiplexing in conjunction with NGS, dubbed SHAPE-Seq (Mortimer et al. 2012). We then broaden its scope to encompass key features of nascent techniques, which further leverage NGS advances to enable probing of entire transcriptomes with multiple pertinent chemical reagents. As these breakthroughs propel the field into an era of ribonomic big data, we discuss data intricacies and subtleties, with a forward-looking perspective on the role that solid informatics infrastructure can play in accelerating progress. We anticipate this work will provide a quantitative basis for the intuition that is needed to guide experimental design, and that it will be of particular use to the many experimentalists who will soon adopt current and forthcoming techniques as sequencing becomes cheaper and as the biochemical assays needed for in vivo and in vitro studies become mainstream.

FIGURE 1. Overview of chemical structure mapping followed by next-generation sequencing.
Reagent molecules preferentially react with unconstrained nucleotides to modify them. Reverse transcriptase (RT) traverses the RNA and drops off upon encountering the first modification. RT may occasionally drop off prior to the modification, in what is termed natural drop-off. Sequencing of the resulting cDNA fragments reveals the sites of modification. When RT starts at a single predetermined primer binding site, one can control two parameters: the average degree of modification, which depends on reagent concentration and reaction duration, and the number of sequenced fragments, which depends on the choice of sequencing coverage. Stochasticity in the composition of sequencing readouts arises from randomness in modification patterns, transcription termination events, and fragment sequencing.

In silico analysis of large-scale chemical mapping

We use a stochastic model of a SHAPE experiment and the sequencing that follows it (see Fig. 1) to generate SHAPE-Seq data in silico for RNA sequences with predetermined SHAPE profiles. The generated data undergo analysis by a method we previously developed (Aviran et al. 2011a), which uses a model and an adjoined maximum-likelihood estimation (MLE) algorithm to infer the degrees of chemical modification by the SHAPE reagent at each nucleotide. This corrects for numerous biases, which distort the sought structural information to yield noisy and convoluted measurements of it. See Materials and Methods for experiment, model, and statistical inference details. The primary outcome of data analysis is a set of point estimates that quantify the intensities of reaction between each nucleotide and the SHAPE reagent (see, for example, Fig. 2A). These are called SHAPE reactivities, and they can be used either independently or in conjunction with algorithms to infer RNA structural dynamics (Low and Weeks 2010). The basis for such structural inference is the strong correlation between low SHAPE reactivities and nucleotide participation in base-pairing or other tertiary structure interactions (Vicens et al. 2007; Bindewald et al. 2011; Sükösd et al. 2013). In this paper, however, we limit attention to evaluating statistical uncertainty in reactivity estimates, with no further quantification of its subsequent impact on uncertainty in structure prediction. In doing so, we eliminate additional sources of variation which these computational and/or knowledge-based methods inevitably introduce (Eddy 2014) and can thus focus solely on understanding interexperiment variability. We initialized simulations with two sets of values per RNA, determined by previous SHAPE measurements. One set comprised normalized relative SHAPE reactivities (also called a normalized SHAPE profile), and the other comprised the propensities of reverse transcriptase (RT) to drop off at each nucleotide in the absence of chemical modification (see Materials and Methods for data and experiment descriptions). Throughout this study, we considered these to be the true inherent structural properties of the RNA, and we kept them fixed. Nonetheless, while a normalized profile is inherent to an RNA, it is not directly measured by a SHAPE experiment. Rather, reaction intensities are being measured; more precisely, for each nucleotide, one assesses the fraction of molecules in which it is modified, a measure that depends on a tunable reagent concentration and/or reaction duration, which we lump together into a notion of concentration. It is thus a parameter that modulates the reactivity profile that we estimated.
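The generative process of Figure 1 is straightforward to simulate. The sketch below is a minimal Python rendition of it, assuming independent Bernoulli modifications at each site (scaled so their expected sum equals the hit rate) and a uniform natural drop-off probability; the paper's actual model and parameter values live in its Materials and Methods, so everything numeric here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_channel(theta, gamma, n_reads):
    """Simulate 3'-end stop sites of cDNA fragments for one channel.

    theta : per-site modification probabilities (all zeros = control channel)
    gamma : per-site natural RT drop-off probabilities
    Returns counts[k] = reads whose RT stopped at site k (1-based from the
    5' end); counts[0] counts full-length cDNAs.
    """
    n = len(theta)
    counts = np.zeros(n + 1, dtype=int)
    for _ in range(n_reads):
        stop = 0  # 0 = RT ran off the 5' end (full-length cDNA)
        for k in range(n, 0, -1):             # RT walks from the 3' end to site 1
            if rng.random() < theta[k - 1]:   # first modification encountered
                stop = k
                break
            if rng.random() < gamma[k - 1]:   # natural drop-off
                stop = k
                break
        counts[stop] += 1
    return counts

# Toy example: 20-nt RNA with a made-up normalized profile and hit rate 2
n, hit_rate = 20, 2.0
profile = rng.random(n)                        # hypothetical reactivities
theta = hit_rate * profile / profile.sum()     # ~2 modifications per molecule
gamma = np.full(n, 0.005)                      # assumed uniform drop-off
plus = simulate_channel(theta, gamma, 50_000)            # (+) channel
minus = simulate_channel(np.zeros(n), gamma, 50_000)     # (-) control channel
```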
To simulate changes in concentration, we defined a hit rate parameter corresponding to the average number of modifications per molecule (see Materials and Methods); it captures the overall degree of modification, also termed hit kinetics. Notably, for a given RNA sequence, the rate could range from small fractions of 1 to >1, depending primarily on the concentration. Since the relative relations between nucleotide reactivities should remain unchanged, we scaled the normalized profile by the hit rate to obtain a true SHAPE profile for a given modification condition (see Materials and Methods). A second controllable feature is the total volume of data collected, which is the number of sequencing reads analyzed. It is a function of a chosen sequencing coverage depth and of the amount of RNA subjected to modification and reverse transcription. In simulations, we modified the total number of reads that we generated for the control and the experiment. Since each read provides evidence on a single molecule's fate, this is the effective number of probed molecules. Once the hit rate and read number are set, stochasticity in measurements arises from the molecules' random fates, as their reverse transcription may abort at different sites due to differences in modification patterns and/or natural drop-off events. The many possible events yield cDNA fragments of varying lengths, thus contributing to variation in the counts of cDNAs of each possible length. It is worth noting that the likelihood that a molecule will give rise to a cDNA fragment of a certain length remains fixed under these settings, as it is fully determined by the reactivity profile and by RT's drop-off properties (see Materials and Methods). In other words, the distribution of fragment lengths is fixed, but finite samples from it display variation in fragment counts. One can think of these theoretical samples as representing technical replicates, i.e., multiple libraries, originating from the same RNA sample and modification experiment, which undergo sequencing and analysis separately. Each simulation thus entailed finite sampling from a precomputed fragment-length distribution. Randomness in sample composition then propagated into variation in sample-based reactivity estimates. The complex relationship between these estimates and the observed fragment counts rendered direct assessment of estimation precision infeasible. It is common practice in such cases to resort to empirical assessment via resampling methods, where one repeats estimation multiple times from subsamples of the original data set. In our in silico study, we evaluated the true precision (under model assumptions) by repeating our workflow sufficiently many times, so as to faithfully reproduce the true distribution of the ML estimate. We utilized this approach to investigate the robustness of measurements and analysis under different experimental conditions.

Quantitative assessment of effects of controllable parameters

We sought to quantify the variation in reactivity estimates across a range of hit rates and data set sizes. Before we present our findings, we note that the lengths of variation intervals correlate with reactivity magnitudes, and in light of the range of SHAPE reactivities in a typical profile, it is challenging to visualize trends in these intervals across a profile. Instead, we depict the relative standard deviation (RSD) per nucleotide, that is, the ratio of the estimated standard deviation (SD) to the true reactivity.
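To make the replicate-sampling scheme concrete, the following self-contained Python sketch computes the exact fragment-length distribution implied by a first-hit stop model, draws multinomial "technical replicates" from it, and reports per-site RSDs. A naive frequency-based estimator (with the control channel subtracted and negative values clipped to zero) stands in for the paper's MLE, and all profile values are illustrative.

```python
import numpy as np

def stop_probs(theta, gamma):
    """p[k] = probability that a read's 3' end maps to site k (k = 0 means
    a full-length cDNA), under a first-hit stop model."""
    s = 1.0 - (1.0 - theta) * (1.0 - gamma)   # total stop probability per site
    n = len(s)
    p = np.zeros(n + 1)
    survive = 1.0
    for k in range(n, 0, -1):                 # RT walks 3' -> 5'
        p[k] = survive * s[k - 1]
        survive *= 1.0 - s[k - 1]
    p[0] = survive
    return p

def naive_stop_rates(counts):
    """Of the reads whose RT reached site k, the fraction that stopped there."""
    n = len(counts) - 1
    est, survive = np.zeros(n), counts.sum()
    for k in range(n, 0, -1):
        est[k - 1] = counts[k] / max(survive, 1)
        survive -= counts[k]
    return est

def rsd(theta, gamma, n_reads, n_rep=500, seed=1):
    rng = np.random.default_rng(seed)
    p_plus = stop_probs(theta, gamma)
    p_minus = stop_probs(np.zeros_like(theta), gamma)
    reps = []
    for _ in range(n_rep):
        plus = rng.multinomial(n_reads, p_plus)    # one replicate library
        minus = rng.multinomial(n_reads, p_minus)
        est = naive_stop_rates(plus) - naive_stop_rates(minus)
        reps.append(np.clip(est, 0.0, None))       # tiny reactivities zero out
    sd = np.asarray(reps).std(axis=0)
    return sd / np.maximum(theta, 1e-12)           # relative standard deviation

rng = np.random.default_rng(0)
profile = rng.random(50)                           # hypothetical 50-nt profile
theta = 2.0 * profile / profile.sum()              # hit rate 2
print(rsd(theta, np.full(50, 0.005), n_reads=100_000)[:10])
```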
We also filter out very small reactivities, as they are prone to zeroing out by our estimation method, which results in very large RSDs. Yet, in the context of an entire profile, these amount to minuscule fluctuations above zero that do not affect data interpretation. Finally, we note that we conducted simulations and observed similar results for several RNA sequences, but for coherence of exposition, in what follows we refer to a SHAPE profile of the P546 domain of the bI3 group I intron.

Effects of hit kinetics

Conducting structure mapping experiments routinely involves optimizing the reagent concentration. The optimum is often sequence- and system-specific, but a common aim is to balance between the adverse effects of too many and too few modifications (Low and Weeks 2010). This is because in molecules that carry multiple modifications, we detect only the one that is closest to the 3′ end (see Fig. 1). The loss of information from the 5′ region manifests itself in signal decay, which we correct for during analysis (Aviran et al. 2011a). Yet, high hit rates intensify signal decay and ultimately expedite signal loss, thereby shortening effective probing lengths. They might also introduce analysis-based inaccuracies due to substantial reliance on proper decay correction (Karabiber et al. 2013). Lowering the rate alleviates these concerns, but also decreases the signal-to-noise ratio (SNR), or signal quality, thus impacting analysis accuracy as well. The fact that both scenarios affect measurement precision, but in complex and subtle ways, motivated us to quantify their effects via simulations. Figure 2 shows RSD values computed over a range of rates for the P546 RNA, along with its normalized SHAPE profile, where negligible reactivities were omitted from analysis (Fig. 2A). For ease of visualization, we divide the rates into three ranges, plotted separately in Figure 2B-D, and group the RSDs per nucleotide within each range. Note that for comparison purposes, we plot the data for a hit rate of 2 in panels B and C (green bars). Some trends are immediately apparent from this comparison, most notably a comprehensive increase in RSD with decreasing modification intensity, attributed to degradation in the obtained signal quality. One can also observe a threshold effect, where modification becomes so sparse that it vastly degrades the quality (see panel D for rate 0.1 in pink). When scanning these trends across the sequence, we also note a change in pattern near the 5′ end. Specifically, reducing the rate from 3 to 2 (blue to green in panel B) results in decreased RSD at sites 1-19, as opposed to increased RSD over the remaining sites. In other words, while increasing reagent concentration beyond 2 benefits measurements in one portion of the molecule, it trades off with the precision elsewhere. This observation captures the impact of severe signal decay on the SNR, as RT's drop-off process leaves very few fragments that can inform us of modifications at the 5′ region. The practical implication is that the molecule length for which high-quality data are effective is even shorter than the length for which signal is observed. Nevertheless, if one aims to circumvent the signal decay problem by resorting to very low hit rates, then the obtained signal spans longer stretches and exhibits little decay, but it also results in overall poor quality (e.g., see the high RSDs in the 5′ region in Fig. 2D).
Importantly, when rates are low, the counts of fragments mapping to the 5′ region may be comparable to, or sometimes even higher than, their counterparts under higher rates, such that we indeed observe a longer and seemingly strong signal in the (+) channel (data not shown). But in fact, the SNR depends on the relative differences between counts at reactive versus unreactive sites, or alternatively, counts in the (+) versus (−) channels, which become negligible as fewer molecules are being modified. While frequent users of such probes are well aware of these tradeoffs and pitfalls, it is difficult to determine the quality and/or effective probing length based on eye inspection and/or acquired intuition alone. More subtle observations from Figure 2 include an inverse correlation between a site's normalized reactivity magnitude and its RSD. This raises the question of whether the observed variations have meaningful impact on the overall quality of the reconstructed profile or perhaps amount to small absolute perturbations. We address it by overlaying the tenth and ninetieth percentiles of the simulated MLE distribution on the true normalized profile, as shown in Figure 3 for select hit rates. Note that absolute reactivities scale with the rate, and therefore, we consider variation around a fixed normalized profile. One can see from the figure that, indeed, the large RSDs translate into fairly small profile perturbations, such that reactive/unreactive sites are well discriminated. It is also apparent that low reactivities cannot be determined accurately and often are indistinguishable from zero, but their range is also confined to strongly indicate structural constraints.

Effects of sequencing coverage

The recent coupling of digital sequencing with structure probing not only facilitated multiplexing and increased throughputs, but also opened the door to more predictable experiment design through precise control over the volume of collected data. Previously, this was nontrivial, as CE platforms generate analog signals from which relative, but not absolute, quantities are determined. Several factors commonly affect the choice of sequencing volume, including platform availability (e.g., desktop versus large-scale machines), data processing cost, multiplexing capacity, and data quality. Next, we elucidate the dependence of measurement precision on the number of analyzed fragments, to shed light on tradeoffs associated with these factors and on relations with reagent concentration. Figure 4A shows RSDs obtained at rates 2 and 1, after a 10-fold reduction in the number of reads. It illustrates the same trends as in Figure 2, but with considerably larger variations. These are even more pronounced for rates <1 (data not shown) and, obviously, for lower read numbers. Figure 4B illustrates the overall degradation in quality of reconstruction for rates 0.5 and 0.1, with 10% of the reads. Again, effects are exacerbated at lower depths (data not shown). While it is expected that measurements under low hit kinetics are more susceptible to reductions in data size, Figure 4 shows that variation can be significant under high hit kinetics as well, especially at the 5′ region. In such cases, collecting fewer sequences might further shorten the effective probing length and may be suitable only for very short RNAs. These results also demonstrate that one can compensate for the effects of low hit kinetics by collecting more data.
This is particularly important in multiplexed settings, since less structured RNAs tend to be more reactive than highly structured ones, and may thereby attract more reagent molecules. In well-controlled conditions, sequencing deeper or populating the sample with more copies of low-reactivity RNAs could bias the coverage toward them. However, such strategies apply predominantly to cell-free studies, where sample manipulation at the laboratory is common, but are irrelevant when RNA material is limited or sample composition cannot be easily altered.

Analysis of transcriptome-wide mapping via random primer extension

Length limitations inherent in detection via primer extension are apparent from our analysis and widely appreciated. Traditionally, long molecules were probed with multiple primers, carefully designed to anneal at intermediate locations (Wilkinson et al. 2008), a labor-intensive effort that also precludes de novo characterization. With the advent of NGS, techniques such as RNA-Seq leveraged multiple distinct hexamer primers capable of random pervasive annealing to enable transcriptome-wide (TW) studies. Alternatively, transcripts are fragmented into random templates that are ligated to adapter sequences, where primers are designed to bind. SHAPE-Seq and similar assays, which rely on single primer extension (SPE), set the foundation for more advanced protocols that detect modifications via random primer extension (RPE), along with capabilities to probe structure in vivo (Ding et al. 2014; Rouskin et al. 2014; Talkish et al. 2014). Implementation and nuances differ between methods, but here we attempt to provide the broadest assessment of RPE-based strategies, because RPE may be coupled to a range of probes and is applicable in diverse conditions, and as such it opens up many more possibilities. For example, in vivo mapping was obtained with dimethyl sulfate (DMS), but a SHAPE-NAI probe has similar functionality (Spitale et al. 2013), whereas other probes enhance structural characterization at in vitro or near-in vivo conditions (Kielpinski and Vinther 2014; Wan et al. 2014).

A useful property of RPE is that it circumvents 3′ directionality bias. Ideally, all modifications are equally amenable to detection, as a primer could land, for example, in between the two modifications cartooned in Figure 1. Our SPE analysis then warrants revisiting, since balancing between too many and too few modifications may no longer be relevant. Furthermore, RPE spreads the reads (i.e., their 3′ end sites) across a molecule, thereby redistributing the amount of information allocated per site. Intuitively, it improves signal quality near the 5′ end at the expense of reducing it near the SPE site, while obviating reliance on signal correction methods. In this work, we avoid detailed SPE versus RPE comparisons, since we view them as geared toward distinct endeavors, e.g., molecular engineering (Qi and Arkin 2014) versus genome-scale studies (Mortimer et al. 2014), respectively. Instead, we extended our model and analysis to capture key additional features of RPE and TW mapping data and to highlight new complexities and tradeoffs in design and informatics. An evident new challenge is that multiplexing is no longer easily manipulable. Biasing coverage toward select transcripts becomes nontrivial, and furthermore, one now faces natural variation in abundances ranging over several orders of magnitude (Mortazavi et al. 2008).
Consequent variation in effective coverage per RNA is a clear cause of SNR and performance differences, which, to date, has been circumvented with low-throughput targeted experiments (Kwok et al. 2013). Before discussing additional layers of complexity, we introduce two new design parameters: primer rate and fragment length range. Importantly, priming and fragmentation are equivalent from a modeling perspective (see Materials and Methods), thus primer rate stands for the average density of RT start sites in either setting. Prior to sequencing, fragments are size-selected to obtain a library of fragments that are within an admissible range. Unlike the SPE case, the analysis below ties measurement quality to three pivotal factors, or design decisions, rather than directly to parameters. Moreover, it reveals how entangled these factors and decisions are. While a simple model suffices to render key performance determinants, it fails to capture additional real-world intricacies of ribonomic big data. Thus, we complement our in silico analysis with a qualitative discussion that elucidates finer details as well as conveys difficulties in their comprehensive treatment by simplistic data analysis schemes. To simplify exposition, we discuss primarily the primer-based DMS approach in Ding et al. (2014), a natural extension of SPE. For the most part, results carry over to fragment-based methods (Rouskin et al. 2014; Talkish et al. 2014); where they do not, we address them specifically.

(1) Ratio of hit to primer rates. To consider the ratio's effects, it is helpful to draw an analogy to SPE. In SPE, we essentially fix the primer density at 1 per the RNA length, and when changing hit rates we in fact modulate the ratio. Dynamics generally carry over to RPE, with two deviations: priming location is not stochastic in SPE, and RT stops due to primer encounters are unique to RPE. At this point, we note that our understanding of standard NGS protocols, integrated into our model, is that no strand displacement takes place at such encounters, and that RT aborts. Yet, similar models can accommodate nonstandard RT steps. Furthermore, our modeling assumption aligns with fragment-based approaches, where RT drops off at a template's end, the analog of a priming site, thus extending the scope of the analysis. While fragment dynamics in SPE and RPE are not identical, the ratio presents a similar tradeoff, namely, decreased SNR due to background noise versus increased 3′ directionality bias (see Materials and Methods for formal analysis). For example, under small ratios, RPE features frequent consecutive primers, preventing RT from reaching adducts (see Fig. 5B, inset). Primer encounters have two undesired outcomes: (1) background noise and (2) loss of modification signal (see Fig. 5B versus Fig. 5C,D). A straightforward way to reduce these encounters is to sparsely modify and prime, in which case long interprimer distances allow for significant natural RT drop-off in between primers, yet another source of background. In fragment-based protocols, template-end background can be discerned and removed prior to sequencing via fragment selection (Rouskin et al. 2014; Talkish et al. 2014). Yet, this trades economic inefficiency for an experimental one, as the fraction of informative, modification-based fragments remains small. As we discuss below, when biological material is limited, both remedies might be infeasible.
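The competition between adducts and primers for RT stops can be illustrated with a toy simulation. The sketch below makes several simplifying assumptions not in the text: per-site Bernoulli placement of adducts and primers, a one-nucleotide primer footprint, and a per-site natural drop-off probability drawn from the 0.005-0.01 range quoted later for SHAPE data.

```python
import numpy as np

rng = np.random.default_rng(1)

def rpe_stop_fractions(lam_hit, lam_primer, n_mol=2_000, length=1000, gamma=0.0075):
    """Fractions of RT stops caused by adducts, primers, and natural drop-off.

    Adducts and primers are placed independently per site (Bernoulli, a good
    proxy for Poisson at small rates); RT starts just past each primer and
    halts at the first adduct, primer, or drop-off event it meets.  Only the
    adduct stops are informative; the rest is background.
    """
    stops = {"adduct": 0, "primer": 0, "dropoff": 0}
    for _ in range(n_mol):
        adducts = rng.random(length) < lam_hit
        primers = rng.random(length) < lam_primer
        for s in np.flatnonzero(primers):
            for i in range(s + 1, length):
                if adducts[i]:
                    stops["adduct"] += 1; break
                if primers[i]:
                    stops["primer"] += 1; break
                if rng.random() < gamma:
                    stops["dropoff"] += 1; break
    total = sum(stops.values()) or 1
    return {k: round(v / total, 3) for k, v in stops.items()}

# Small hit-to-primer ratio: mostly uninformative primer-encounter stops.
print(rpe_stop_fractions(lam_hit=0.003, lam_primer=0.03))
# Large ratio: adduct stops dominate, at the cost of 3' directionality bias.
print(rpe_stop_fractions(lam_hit=0.03, lam_primer=0.003))
```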
High hit-to-primer rate ratios, in contrast, give rise to frequent consecutive adducts with no intermediary primer, a source of signal decay and information loss when reactivities are not uniform (see Fig. 5C, in particular the highlighted regions). The analogy to SPE aims to recapitulate our previous points: that avoiding signal decay, and thereby reliance on nontrivial data correction, does not necessarily translate into better-quality data, and that quantitative evaluation of design choices and informatics pipelines is beneficial. Notably, resemblance to SPE dynamics increases once high-pass cDNA filtering is introduced via a lower size cutoff, as it imposes a fixed blind window downstream from each modification, which intensifies signal decay (see Fig. 5D, highlighted regions). For example, highly reactive sites located downstream from a modification and within this window frequently display modifications that "shadow" it (see Fig. 5D, inset). The higher the hit rate, the more severe this directionality bias. For a window size w, we can think of it as similar to positioning a primer w nucleotides downstream, which means the decay spans a window's size and can be difficult to spot for short windows. Other effects of fragment filtering are discussed next.

(2) Fragment lower-size cutoff. Size selection throws away potentially valuable information. While normally undesired, it is common practice when data entail complexities or ambiguities which are nontrivial to resolve. Mining information from NGS readouts has been an ongoing challenge in TW studies, mainly because read lengths are such that alignment to multiple genomic locations is common (Trapnell et al. 2010). Despite consistent increases in read lengths, the generation of short cDNAs is inherent to existing mapping methods, and is even more pervasive in experiment than in control. Ambiguous cDNA alignments then give rise to uncertainty with respect to a cDNA's true origin, translating into noisy hit counts per site. A straightforward remedy is to discard all ambiguously aligned reads, but that decreases total counts, or signal strength, and might leave some regions unmapped. One may also increase the size cutoff point as a means to reduce uncertainty in counts, but that too leaves us with less usable information. Either measure reduces both noise and signal power, making the composite effect on SNR difficult to predict. Furthermore, the extent of ambiguity in alignments is system- and reference-dependent. For example, transcriptomes often consist of multiple gene isoforms with substantial sequence overlap that are absent from the matching genome reference. Isoform-level studies are then more prone to this issue than gene-level ones. At the same time, the extent of ambiguity is design-dependent, as it is tightly linked to the shape of the cDNA length distribution. When cDNAs are relatively short, larger fractions of them trigger uncertainty in comparison to data sets comprising longer fragments. Length distributions largely depend on the sum of the hit and primer rates, which sets the interprimer/adduct distances (see Materials and Methods for detail). Sparse dynamics would then be more robust to this issue, but as we discuss next, they pose other critical challenges.

(3) Sum of hit and primer rates. The total frequency of priming and modification events determines key features of the cDNA length distribution, e.g., its mean and variance.
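Under the simplified Poisson-like dynamics formalized in Materials and Methods, the distance to the next stop event is approximately geometric, with a per-site rate equal to the sum of the hit and primer rates, which makes the effect of size selection easy to quantify. A short sketch, with illustrative rates only:

```python
import numpy as np

rng = np.random.default_rng(2)

def length_stats(lam_total, w_low=25, w_high=500, n=200_000):
    """Fragment-length mean/variance and size-selection yield.

    lam_total: sum of per-site hit and primer rates; inter-event distances
    are geometric with this rate, so the mean length is ~1/lam_total.
    Returns the fraction of fragments retained by a [w_low, w_high]
    selection (25-500 nt is the default range used in the software).
    """
    lengths = rng.geometric(lam_total, size=n)
    kept = ((lengths >= w_low) & (lengths <= w_high)).mean()
    return lengths.mean(), lengths.var(), kept

for lam in (0.006, 0.06):   # sparse vs. dense event dynamics
    mean, var, kept = length_stats(lam)
    print(f"total rate {lam}: mean length {mean:.0f} nt, retained {kept:.1%}")
```

Consistent with the discussion below, the sparse setting retains a much larger fraction of fragments after size selection.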
As mentioned, one can circumvent some confounding issues by targeting low hit and primer rates (i.e., sparse dynamics). Sparse dynamics yield fewer reads per molecule, a problematic outcome when biological material is limited to a degree that a "sequence deeper" brute-force solution is infeasible. Taking the wide variation in RNA abundances into consideration, design also greatly depends on the transcripts of interest. Material limitations are analogous to limiting coverage per transcript. To get a sense of current capabilities and associated data quality, we revisit our SPE analysis with coverage anticipated based on recent work. For example, extended Figure 2 in Ding et al. (2014) shows maximal coverage close to 100 reads on average per site, obtained for a minute fraction of the RNAs from libraries of tens of millions of reads. For the P546 RNA, this amounts to a total on the order of 10⁴ reads, whereas Figure 4 depicts an order of magnitude deeper coverage (4 × 10⁵). If we allocate an average of 100 reads per site to a transcript of length 775 nt (profile shown in Fig. 5) and set hit and primer rates to 0.003 per site, our model predicts variation as shown in Figure 6, where lower rates or coverage display further degradation (data not shown). Critically, reported coverage per transcript ranges over five orders of magnitude, with merely a quarter of the transcripts featuring at least one read per site on average, which would yield about 10² or more reads in the P546 example. Unfortunately, the need for deep coverage has not been assessed quantitatively, albeit highlighted qualitatively in Talkish et al. (2014). Notably, crude preliminary assessment does not require sophisticated models. Instead, one can bootstrap the data for preliminary quality measures, for example, by using the NGS-based approach introduced in Aviran et al. (2011a). Yet, prior in silico design is still useful. For example, we showed that the read-per-molecule yield also depends on the size cutoff, with sparse dynamics affording higher fractions of retained fragments, thus linking this factor to another design choice.

Key principles of judicious design are well captured by our model, but numerous other confounding factors are beyond its scope, some unique to structure mapping and others widely prevalent in functional genomics NGS assays. Interestingly, our experience is that some issues can be addressed by model-based statistical approaches, which typically treat most reads as valuable information and include them in analysis (Trapnell et al. 2010; Roberts et al. 2011). In what follows, we touch briefly upon factors we have become aware of while working in this field.

(1) Nonuniform priming. Analysis of RNA-Seq data reveals systematic biases in cDNA generation, attributed to hexamer binding or fragmentation (Roberts et al. 2011). These biases introduce local signal amplitude changes, which might alter the relativity among inferred reactivities. Distortion may be more pronounced when a narrow range of fragment sizes is selected (e.g., 25-45-nt fragments in Rouskin et al. 2014), in which case the information per site originates from a short stretch of RT start sites. In other words, a narrow range localizes a perturbation's effect whereas a wide range smooths it out. Note, in passing, that some published analyses alleviate these discrepancies by comparing normalized counts between control and experiment, with normalization accounting for transcript abundance and possibly length (Ding et al. 2014; Talkish et al. 2014).
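A toy simulation makes the priming-bias point concrete; it is illustrative only, with an invented affinity profile and the narrow 25-45-nt size selection used in the example above.

```python
import numpy as np

rng = np.random.default_rng(3)

def priming_bias_demo(n_sites=300, n_frags=100_000):
    """Show how locally boosted priming affinity distorts raw stop counts.

    With uniform reactivity and uniform priming, counts should be roughly
    flat; boosting priming affinity over a short region enriches counts in
    a 25-45 nt window downstream of that region (in the direction of RT
    extension), rather than spreading the perturbation evenly.
    """
    delta = np.full(n_sites, 1.0)
    delta[100:110] *= 5.0                          # invented hexamer-preferred motif
    delta /= delta.sum()
    starts = rng.choice(n_sites, size=n_frags, p=delta)
    offsets = rng.integers(25, 46, size=n_frags)   # narrow size selection
    stops = starts + offsets                       # index grows with extension
    stops = stops[stops < n_sites]
    return np.bincount(stops, minlength=n_sites)

counts = priming_bias_demo()
print("background mean:", counts[:100].mean().round(1),
      "| enriched window (125-155):", counts[125:155].mean().round(1))
```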
Local count normalization over 50-200-nt windows is carried out in Rouskin et al. (2014) to remedy fragmentation-specific artifacts at the 3′ end. Such a heuristic could have somewhat compensated for nonuniformity if normalization had spanned a similar window size (i.e., 45 − 25 = 20 nt). Instead, boosted fragmentation at a site results in attenuation of all reactivities in a window of, say, 200 nt, whereas counts are effectively enriched only within 25-45 nt upstream of that site. This not only leaves local perturbations in place, but also generates further imbalance in relativity between normalization windows.

(2) Multiple alignments and transcript abundances. Statistical uncertainty due to multiple alignments is intricately related to another confounding factor: unknown RNA abundances. Knowledge of relative abundances often implies that certain alignments are more probable than others, and in this way, it can inform alignments, counts, and reactivities. For example, a subset of reads mapping to two isoforms would be split differently if the isoforms are equally or differentially expressed. In RNA-Seq, statistical methods resolve such ambiguities jointly with quantification of abundances, read error rates, and biases (Roberts et al. 2011), but one must keep in mind that mapping assays introduce additional complexity in the form of unknown reactivities.

(3) Fragment upper-size cutoff and ambiguous RT stops. A useful property of RPE is that no sequence information is needed a priori. But there is also no notion of a full-length RNA template with well-defined ends, which makes it impossible to discern by sequencing alone between fragments arising from modification and those resulting from RT running through template ends or bound primers. Current fragment-based methods (Rouskin et al. 2014; Talkish et al. 2014) approach this ambiguity experimentally by filtering all fragments of the latter type. From an informatics standpoint, Rouskin et al.'s approach is more brute-force, as it discards more than just full-template copies, but nonetheless, both protocols throw away potentially valuable information. For example, if signal decay prevails, its correction relies on the number of successful elongations past a site (see Equation 4 in Materials and Methods), a quantity whose recovery may suffer bias due to missing information. It is interesting to note that this issue becomes negligible under sufficiently sparse conditions, because RT's imperfect processivity limits achievable cDNA lengths and the chances of running through template ends or primers. This appears to be the case in Ding et al. (2014), although their approach may potentially account for primer encounters under different conditions through integration of (+) and (−) data into the reactivity estimates.

(4) Protein-RNA interactions. A fundamental difference between in vitro and in vivo probing is the absence/presence of protein-RNA interactions (PRI), many of which are yet to be revealed. PRI can trigger structural rearrangements, and indeed, recent studies reveal global measurement differences between conditions (Kwok et al. 2013; Rouskin et al. 2014). Yet, observed changes may also be attributed to protein protection from modification by way of solvent inaccessibility (Kwok et al. 2013), yielding low reactivities. PRI thus give rise to ambiguity, as one cannot readily discern between structurally constrained regions and protein-bound ones from weak signal alone.
This has been a long-standing challenge, but with recent breakthroughs and the anticipated wealth of data, it becomes a critical barrier and possibly a primary bottleneck to accurate interpretation of in vivo data and their power to improve structure prediction. Clearly, this is unique to these emerging techniques, and more so, increasing the information content of these probes via statistics or deeper coverage does not seem plausible. We anticipate that progress will be achieved through integration and joint analysis of complementary assays.

FIGURE 6. Variation in reactivities reconstructed by the scheme in Ding et al. (2014), computed from 100,000 RPE simulations of 77.5 × 10³ reads. Box boundaries mark the tenth and ninetieth percentiles of the empirical distribution; plus signs mark target normalized SHAPE reactivities in the fictive 775-nt-long RNA depicted in Figure 5. Hit and primer rates are 0.003 per site, and shown is a middle window of reactivities to circumvent end effects.

(5) Background noise. A (−) channel controls for RT's imperfect processivity, which generally features nonuniform, possibly structure-dependent rates, with occasional spikes. Given the nonwhite nature of this noise, it is standard practice to integrate it at nucleotide resolution into SPE analysis. In RPE, this is also warranted, and it is obtained in Ding et al. (2014) and Talkish et al. (2014) through comparisons of (−) and (+) readouts. There are several points one should keep in mind when integrating background. First, its magnitude depends on experimental conditions, which can be probe-dependent (e.g., DMS versus SHAPE), as well as on fragment lengths. Second, it is important to retain the same RNA structure in (−) and (+). This is problematic when randomly fragmenting, as each fragment adopts its own structure prior to the RT step. Since short fragments are quicker to denature when heated, they are advantageous for noise reduction (Rouskin et al. 2014). Third, when signal decay prevails, it is also present in the (−) channel, albeit more moderately (Aviran et al. 2011a). Decay can then become significant upstream of spikes or of sites with high noise levels.

(6) Missing information near transcript ends. Coverage levels decline gradually toward the 3′ end due to shortening of the regions accessible for hexamer priming. The longer the cDNA fragments are (on average), the more pervasive the associated SNR degradation is, rendering sparse conditions less ideal for short-transcript studies. Near the 5′ end, information is lost when attempting to discriminate between fragmentation and modification by way of two size-selection rounds (Rouskin et al. 2014), leaving an unmapped stretch matching the length gap between rounds.

(7) Comparative analysis. Our SPE analysis illustrates the role of profile normalization in facilitating comparisons. Commonly used normalization schemes bridge varying signal intensities (Low and Weeks 2010) and may successfully accommodate variation in coverage per transcript. However, we anticipate unprecedented diversity of structural profiles, encompassing a range of lengths, probes, and conditions, which would require thoughtful comparisons. A comprehensive framework is currently lacking, along with standardization of analysis routines, such that the entire process of analysis followed by normalization is meaningful.
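As an aside on the normalization schemes mentioned above, the following is a sketch of one widely used heuristic, the so-called 2%/8% rule; it is one of several variants in the spirit of Low and Weeks (2010), not necessarily the specific scheme used in this work.

```python
import numpy as np

def normalize_2_8(reactivities):
    """Normalize a reactivity profile with the common '2%/8%' heuristic.

    Excludes the top 2% of reactivities as outliers and divides the profile
    by the mean of the next-highest 8%, so that typical highly reactive
    sites sit near 1.0.  Box-plot-based outlier rules are another variant.
    """
    r = np.asarray(reactivities, dtype=float)
    order = np.sort(r[~np.isnan(r)])[::-1]     # descending, NaNs removed
    n = len(order)
    top2 = max(1, int(round(0.02 * n)))
    next8 = max(1, int(round(0.08 * n)))
    scale = order[top2:top2 + next8].mean()
    return r / scale
```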
Software availability

The computational tools developed for this study are freely available at http://www.bme.ucdavis.edu/aviranlab/sms_software/.

DISCUSSION

We presented novel informatics methodology for assessing the precision and reproducibility of measurements obtained from an emerging class of assays that leverage NGS to dramatically enhance the throughput, scope, and efficiency of structural RNA studies. From a data analysis standpoint, NGS is also transformative by virtue of delivering digital readouts, as compared with the previous readout of analog dye intensities. This new wealth of digital information provides opportunities to improve experiment design and reproducibility. In the case of structure mapping assays, we can now determine the number of collected reads and directly link it to measured quantities via computer simulations. Yet, measurements suffer from complex dependencies on reagent concentrations and on fragment size selection. Integration of mathematical models into simulations allows linkage of these experimental parameters to measurements, as well as automation of data analysis (Aviran et al. 2011a). These new capabilities motivated us to use model-based simulations to elucidate the effects of controllable parameters on data quality. While our work provides a platform and conceptual framework for quantitative evaluation of these effects, its main contribution is in rendering the complexity of input-output relationships. Furthermore, our results highlight the difficulty in accurately determining them by intuition or visual data inspection. In the SPE setting, we showed that factors such as reactivity magnitude and probing length modulate the SNR, and that the gradual quality degradation trend as hit rates decrease may reverse at some point. However, such events are case-specific and may not be readily detected. Similarly, it is difficult to infer an effective probing length for obtaining high-quality data through observation of a signal's strength. The advent of RPE shifts the scale of experiments and introduces additional parameters and confounding factors, bringing complexities to levels that warrant dedicated big-data infrastructure for computer-aided design. Finally, one must keep in mind that tradeoffs are RNA-specific, and in and of itself, this justifies careful evaluation. The workflow we developed is useful for this purpose and will aid new users of these transformative technologies in gaining the intuition required for experiment design.

At the core of our work is a model of SHAPE-Seq and similar chemistries. While modeling is what facilitates such a study, it may also constrain its applicability as long as the model has not been thoroughly validated. In modeling SHAPE, we made two assumptions: (1) site-wise independence of measured features and (2) Poisson reaction dynamics. While the latter is standard in modeling biochemical or low-incidence reactions (Aviran et al. 2011b), the former is not yet fully established, likely because these methods gained popularity only recently. We thus anticipate that ongoing data collection will trigger much-needed data-driven modeling (see, e.g., Bindewald et al. 2011; Sükösd et al. 2013), which we can then reiterate to refine the model and improve its predictive power. There is also a need to assess the degree of other noise and bias sources, for example, those incurred in NGS library preparation, although progress is being made in overcoming these issues (Jayaprakash et al.
2011; Shiroguchi et al. 2012; Ding et al. 2014). With the emergence of TW assays, additional modeling questions arise: (1) Does cDNA synthesis proceed through encounters with primers via strand displacement, or does RT abort? and (2) Does modification interfere with primer binding by preclusion, or does it merely bias it? Answers may be protocol-dependent (e.g., the choice of RT and reagent) and would alter the model and data properties, particularly the differences between the distributions in control and experiment, from which reactivities are derived (data not shown). A more overarching question concerns the new capacity for in vivo studies: do bound proteins interfere with the probing chemistry, for example, by protecting sites from modification (Kwok et al. 2013)? If yes, then how do they alter a structural signature, and how can a model account for that? Nevertheless, we emphasize that the conceptual analysis framework we presented is generic in that it is not tied to any protocol and can be readily adapted to other experimental choices.

The field of nucleic acid structure probing is rapidly evolving, with the maturation of recent techniques and the emergence of more complex ones that extend its scope to in vivo and TW studies. We believe that these advances should be accompanied by matching progress and refinement in informatics infrastructure, to aid in accelerating their optimization and adoption by the research community and to improve their robustness and fidelity. Alternatively, clever new experiments may resolve numerous issues, with the recent SHAPE-MaP (Siegfried et al. 2014) establishing exciting progress in this direction. In SHAPE-MaP, modified sites are encoded by incorporation of noncomplementary nucleotides in cDNA synthesis, where detection by sequencing amounts to careful and elaborate alignment and mismatch identification. Two additional libraries are needed to control for background and for sequence-context effects on adduct detection likelihood. This new experimental paradigm eliminates the directionality inherent in the reviewed methods, thus vastly simplifying analysis by reducing it to site-by-site inference. This in turn eliminates some key issues we discussed, in particular those involving the relationship between priming and modification dynamics. SHAPE-MaP signal appears to exhibit dependencies on the hit rate, RT mutation rates (natural and adduct-induced), sequencing errors (which are platform-specific), alignment and read selection strategies, and coverage. Some of these dependencies may not be trivial, and it could be valuable to use our conceptual framework to gain more insight into this promising technique. Furthermore, the simplification of data analysis suggests that in-house in silico optimization of such experiments may be readily feasible for experimentalists. The anticipated stream of ribonomic big data also highlights the importance of data-informed computational structure analysis, and indeed, much recent progress has been made in this domain (see, e.g., Deigan et al. 2008; Quarrier et al. 2010; Ding et al. 2012; Hajdin et al. 2013; Eddy 2014). It is of interest to quantify the effects of data variation on structure prediction, for example, by concatenating such algorithms to our workflow, and further identifying which ones are more robust with respect to technical data variation.

MATERIALS AND METHODS

Model of SHAPE/DMS chemistry

Since the principles of SHAPE, DMS, and other chemistries are similar from a modeling perspective (Weeks 2010), a SHAPE model is representative of several techniques.
We consider an RNA sequence whose nucleotides (or sites) are numbered 1 to n by their distance from the 3′ end, where a cDNA primer binds to initiate its extension by RT. In the (+) channel of a SHAPE experiment, the RNA is treated with an electrophile that reacts with conformationally flexible nucleotides to form 2′-O-adducts (see Fig. 1). Each molecule may be exposed to varying numbers of electrophile molecules, where each exposure may result in a site's modification (i.e., adduct formation). We model the number of times an RNA molecule reacts with electrophile molecules as a Poisson process of unknown hit rate c > 0, that is,

Prob(molecule is hit m times) = e^(−c) c^m / m!, m = 0, 1, 2, …   (1)

The site of adduct formation is determined by a probability distribution, denoted Θ = (θ_1, …, θ_n). One can think of this formulation as expressing a competition between n sites over an electrophile molecule, where θ_k is site k's relative attraction power. We call Θ the normalized relative SHAPE reactivity profile, or in short, the normalized profile, and we use it as a baseline for comparison of measurements taken across varying experimental conditions. In our model, the number of modifications at site k is also Poisson-distributed, with hit rate r_k = cθ_k ≥ 0, i.e., we have

Prob(site k is modified m times) = e^(−r_k) (r_k)^m / m!   (2)

We therefore also consider the SHAPE reactivity profile R = (r_1, …, r_n), which we estimate from sequencing data. R is a scaled version of the normalized profile Θ (R = cΘ); hence the r_k's do not form a probability distribution but rather sum to the hit rate, c = Σ_{k=1..n} r_k. Scaling by c implies that R lumps the modification intensity, or hit kinetics, into it, while Θ is invariant to c. In practice, this means that changes in reagent concentration modulate R but not Θ, motivating us to use Θ for comparisons across modification conditions. In a control experiment, called the (−) channel, the primary source of sequencing data is RT's imperfect processivity, resulting in its dropping off during transcription, potentially at varying rates across the molecule. We define the drop-off propensity at site k, γ_k, to be the conditional probability that transcription terminates at site k, given that RT has reached this site. The parameters Γ = (γ_1, …, γ_n), 0 ≤ γ_k ≤ 1 for all k, characterize RT's natural drop-off and are unknown, and thus are estimated jointly with R from data.

Models of random primer extension (RPE)

RPE diversifies the data, introducing variable start sites; i.e., a (j,k)-fragment now maps to sites j to k−1 in the RNA, with varying j and k. We introduce n parameters, Δ = (δ_1, …, δ_n), 0 ≤ δ_k ≤ 1, which capture priming or cleavage affinities. The expressions below pertain to random priming, but with slight adaptation they would model fragmentation. Here, δ_j is the probability that a hexamer binds sites j to j+5. Three factors trigger RT stops: natural drop-off, modification, or a bound primer upstream of j+5. In the (−) channel, only two factors take effect, yielding

M_1 = Prob((j,k)-fragment from a primer) = δ_j · ∏_{i=j+6..k−1} (1−δ_i)(1−γ_i) · (1−γ_k) δ_k,
M_2 = Prob((j,k)-fragment from natural drop-off) = δ_j · ∏_{i=j+6..k−1} (1−δ_i)(1−γ_i) · γ_k,

where Prob((j,k)-fragment in (−) channel) = M_1 + M_2. This is the probability of priming at j, not priming or dropping off anywhere between j+6 and k−1, and dropping off at k either naturally or due to a primer. In the (+) channel, experimental considerations affect the model, since modification takes place prior to hexamer binding and may or may not preclude it, or it may merely bias it. This is not yet well understood and may be probe-dependent. DMS, for example, interferes with the Watson-Crick base-pairing face of adenines and cytosines, whereas SHAPE targets the backbone.
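Before turning to the (+) channel, the (−) channel probability just described is straightforward to evaluate numerically. The sketch below encodes the M_1 + M_2 expression as reconstructed above (the exact factorization in the original software may differ) and uses 0-based indexing for convenience.

```python
import numpy as np

def fragment_prob_minus(j, k, delta, gamma):
    """Prob of a (j,k)-fragment in the (-) channel of the RPE model.

    Prime at j, no priming or natural drop-off between j+6 and k-1,
    then stop at k either naturally (gamma_k) or because of a bound
    primer (delta_k).  delta and gamma are per-site probability arrays.
    """
    body = np.prod((1 - delta[j + 6:k]) * (1 - gamma[j + 6:k]))
    return delta[j] * body * (gamma[k] + (1 - gamma[k]) * delta[k])

# Toy check with flat affinities and a drop-off rate in the quoted range.
n = 200
delta = np.full(n, 0.01)
gamma = np.full(n, 0.0075)
print(f"P((10,60)-fragment in (-) channel) = "
      f"{fragment_prob_minus(10, 60, delta, gamma):.3e}")
```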
Relevance to fragmentation is also unclear, since cleavage occurs between nucleotides. Furthermore, hydroxyl radical probes of solvent accessibility and tertiary structure do not face this issue, as they substitute cleavage for modification (Kielpinski and Vinther 2014), but they naturally fit in our analysis framework. For these reasons, the simulation results reflect an assumption that modifications do not impede binding, but we also developed and implemented in software a model describing mutually exclusive events. Under the non-preclusion assumption, the chances of the events are as follows:

P_1 = Prob((j,k)-fragment from modification) = δ_j · ∏_{i=j+6..k−1} (1−δ_i)(1−γ_i) e^(−r_i) · (1 − e^(−r_k)),
P_2 = Prob((j,k)-fragment from natural drop-off) = δ_j · ∏_{i=j+6..k−1} (1−δ_i)(1−γ_i) e^(−r_i) · e^(−r_k) γ_k,
P_3 = Prob((j,k)-fragment from a primer) = δ_j · ∏_{i=j+6..k−1} (1−δ_i)(1−γ_i) e^(−r_i) · e^(−r_k) (1−γ_k) δ_k,

and Prob((j,k)-fragment in (+) channel) = P_1 + P_2 + P_3. Additionally, our software implements a reactivity reconstruction scheme described in Ding et al. (2014). The fragment-length range defaults to 25-500 nt.

Poisson-based dynamics. It is helpful to simplify analysis by modeling modification and priming as two independent Poisson processes, with rates λ_1 and λ_2 per nucleotide, respectively. Note that this imposes equal rates per site, that is, uniform priming and equal reactivities. Since Poisson-based waiting times are memoryless, given an adduct at site k, the chances that the next event will be an adduct or a primer are λ_1/(λ_1 + λ_2) and λ_2/(λ_1 + λ_2), respectively; hence, dynamics are governed by λ_1/λ_2. A more realistic model allows varying rates per nucleotide, as modeled for SPE, thus breaking the symmetry among sites. This means that some sites are modified more frequently than others, and that low-reactivity sites are more likely to "see" a shadowing adduct downstream than highly reactive ones.

SHAPE data

To render simulations realistic, we used available SHAPE profiles, which we normalized and set as the true structural signatures to be fixed throughout the simulations. For SPE, we focused on short RNAs, since RT's imperfect processivity results in loss of signal typically within a few hundred nucleotides, an effect that is expedited under high hit kinetics. For illustration purposes, we chose the 155-nt-long P546 domain of the bI3 group I intron, quantified via SHAPE-CE (Deigan et al. 2008). It has the attractive property of being well balanced, such that reactivities of various magnitudes are spread fairly evenly. Our simulations also rely on quantified RT natural drop-off likelihoods, and despite being determined during analysis, these auxiliary measures are not typically reported along with the reactivities. We therefore fixed the γ_k's to be within 0.005-0.01, an average drop-off probability range we calculated from SHAPE-Seq data (Mortimer et al. 2012). These values also align with SHAPE-CE estimates (Wilkinson et al. 2008). For RPE, we considered very long transcripts in order to faithfully emulate sparse reaction dynamics and realistic mRNA lengths, and also to avoid end effects. We mimicked a long transcript through concatenation of multiple copies of a characterized short RNA.

Empirical MLE distribution

We empirically assessed the distribution of estimates per site by drawing N = 10⁵ independent samples (with replacement) of 4 × 10⁶ reads from the distributions in Equations 1-2 and running them through MLE. The large sample size was chosen to ensure that the sample variance, s² = (1/(N−1)) Σ_{i=1..N} (Q̂_i − Q̄)², with Q̄ = (1/N) Σ_{i=1..N} Q̂_i, which we treat as if it were the true variance, would be narrowly distributed around the true value.
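The resampling scheme above is easy to reproduce at reduced scale. A generic sketch follows, with placeholder hooks for the data simulator and estimator, which are not specified here:

```python
import numpy as np

def empirical_estimate_distribution(simulate_reads, estimate, n_reps=1000):
    """Empirical per-site distribution of reactivity estimates.

    simulate_reads(): draws one synthetic data set from the model;
    estimate(data): maps it to a per-site reactivity estimate (e.g., an MLE).
    Returns per-site mean, standard deviation, and RSD across replicates;
    the text uses N = 1e5 replicates, scaled down here for brevity.
    """
    reps = np.array([estimate(simulate_reads()) for _ in range(n_reps)])
    mean = reps.mean(axis=0)
    sd = reps.std(axis=0, ddof=1)
    return mean, sd, sd / np.maximum(np.abs(mean), 1e-12)
```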
Since each binomial k-fragment count distribution is nearly Gaussian for large read numbers, we first adjusted N such that SD(s_k²)/μ_k = √(2/(N−1)) · s_k²/μ_k is negligible at all sites, where μ_k and s_k² are the mean and variance of the Gaussian approximation at site k. However, the distribution of estimates is not necessarily Gaussian, in which case the above calculation may not apply. To remedy a situation where the RSD is higher than that under the Gaussian assumption, we increased N by an additional two orders of magnitude, to obtain N = 10⁵.
Perspectives for Testing Quantum Aspects of Gravity using LISA

LISA should be able to detect the gravitational waves from the QNM ringdown of supermassive black holes in the 10⁵-10⁸ solar mass range. On the other hand, it is reasonable to think that any quantum theory of gravitation should impose the quantization of the energy levels of these QNMs. Here we discuss the possibility of distinguishing quantum aspects of gravity using LISA to observe QNM overtones of highly excited supermassive black holes.

Introduction

Gravitational waves cannot escape from within the event horizon of a black hole. However, outgoing waves which originate in the spacetime outside the horizon are possible and are the so-called quasinormal modes (QNMs). There is an extensive amount of literature on QNMs [1,2,3,4,5,6], which includes good reviews [7,8]. QNMs of perturbed black holes are the characteristic damped oscillations which display the signatures of these objects. It is expected that they will provide the definitive observational evidence of the existence of black holes. In fact, since QNMs depend uniquely on a black hole's mass and angular momentum, they will give us the possibility of determining these parameters by performing black hole gravitational wave spectroscopy. Besides, it has been speculated that QNMs might provide not only the classical signature of black holes but also give some information about their quantum aspects [9]. This opens an interesting possibility of testing quantum aspects of gravity by observing QNMs from existing astrophysical black holes. LISA and other laser interferometer antenna projects in space, such as BBO and DECIGO, seem to be the best choices for observing gravitational waves coming from these QNMs, because the expected signal-to-noise ratio can be enormous in many possible scenarios.

Based on Bohr's correspondence principle, it is possible to think of the high overtones of the ringing frequencies of a black hole as being equally spaced [10]. On the other hand, the lowest QNM frequency might be related to the mean irreducible mass associated with the quantum ergosphere [11]. Therefore the quantum aspects of gravity might be imprinted either on the asymptotic behavior of the high overtones (on their relative frequency separation), or on the wavelength of the fundamental ones (their absolute frequencies), or on both. QNMs excited by the coalescence of supermassive black holes (SMBHs) in galaxy mergers or by newly formed SMBHs, among other astrophysical events, are very important sources for LISA [12]. LISA should be able to detect the gravitational waves from the QNM ringdown of supermassive black holes in the 10⁵-10⁸ solar mass range throughout the observable Universe [13], as well as having the potential to perform no-hair tests [14]. Here we analyze the perspectives for testing quantum aspects of gravity using LISA.
Quasinormal Modes of Supermassive BHs and Possible Astrophysical Scenarios of Excitation

From theoretical calculations it appears that there is a high probability that most energy is emitted in the fundamental mode of the quadrupole (l=2) [15], leaving the other overtones of the quadrupole and the other multipoles with a much smaller portion of the total energy emitted. The physical explanation of this might be related to the time scale of the free fall ("plunge") of bodies from the innermost stable circular orbit (~6GM/c³) into a BH and the last stable orbit period (~2π·6GM/c³), which are both around the period of the fundamental mode of the quadrupole (~2.7·6GM/c³). In other words, the time scales of most of the physical processes that excite the BH are closely tuned to the fundamental (n=0, l=2) QNM. This makes the amount of energy that goes to the other modes very sensitive to the details of the astrophysical event that produces the excitation. For a "point test particle" of mass m falling radially into a Schwarzschild black hole of mass M >> m, Davis et al. 1971 found the distribution of the total energy over the multipoles l=2, l=3, l=4, l=5, l=6 to be about 0.879, 0.105, 0.0134, 0.00191, 0.000268, respectively.

Two important ingredients that should be taken into account are the rotation of BHs [20,21] and the accretion of matter (dust shells and thick accretion disks) onto the black hole [22,23,24,25]. We should analyze how they will affect the precision of the QNM frequency measurement and, therefore, our capability of testing quantum aspects of gravity with these measurements.

QNMs of SMBHs can be strongly excited and emit gravitational waves under a finite number of possibilities. We may consider the following:
- the formation of a SMBH from a protostar;
- the coalescence of two smaller SMBHs;
- the capture of stars by a SMBH;
- the falling of large amounts of baryonic matter (rocks, dust, and gas) into the SMBH.

It is possible that there are other exotic possibilities (maybe involving cosmic strings or dark matter), but we will restrict our analysis to the above set. After a quick inspection of the above list, we can conclude that almost all major possible sources of excitation of the QNMs of SMBHs have time scales related to the free-fall time ("plunge") from the innermost stable circular orbit (from ~3 R_Sch = R_ISCO) and the last stable orbit periods. The exception might be the formation of the SMBH from a supermassive, very low metallicity protostar, and the cases of close to equal-mass SMBHs.

Analysis and Discussion Considering Future Observations from LISA

Using numerical relativity simulations of non-spinning binary black hole mergers, Berti et al.
2006 and 2007 [13,14] analyzed the problem of detecting ringdown waveforms and of estimating the source parameters, showing that LISA has the potential to perform no-hair tests of general relativity. They computed the expected signal-to-noise ratio for ringdown events, the relative parameter estimation accuracy, and the resolvability of different modes. They also discussed the extent to which uncertainties on physical parameters, such as the black hole spin and the energy emitted in each mode, will affect the ability to perform black hole spectroscopy. Ioka and Nakano 2007 [26] also studied the problem of higher perturbative orders of QNMs in binary BH mergers. They found that the second-order QNMs (l=4) have frequencies twice those of the first-order ones (l=2) and that their GW amplitude is up to ~10% of that of the first-order one, in agreement with the previous findings of Davis et al. 1971 [15]. They also compared these characteristic GW amplitude curves (first-, second-, and third-order) with the sensitivity curves of LISA and Ultimate DECIGO.

How feasible is it to test quantum aspects of gravity using LISA and other laser interferometer projects in space? One really promising aspect of these detectors is that they will measure gravitational waves from astrophysical events with huge signal-to-noise ratios, especially when they come from SMBHs. Some of these events will be seen at the border of the observable universe.

In order to perform our analysis we are going to choose one event with a high signal-to-noise ratio and, likewise, with smaller errors in the determination of astrophysical parameters such as rotation (spin). Our SMBH, therefore, should have its QNMs in the highest sensitivity band of LISA, namely from ~2 to 20 mHz, which is in agreement with the results found by Berti et al. 2006 [13]. Our SMBH will have 3.7 × 10⁶ solar masses, which is the mass of the putative BH at the center of our Milky Way. For scenarios at high redshifts the total mass should go down from the 3.7 × 10⁶ solar mass value by a factor of (z + 1).

In Figure 1 we plotted the characteristic GW amplitude (empirical) curves for the fundamental (n=0) and the first seven excited overtones, plus the 70th excited overtone, for the first-order (l=2, quadrupole) QNM of a Schwarzschild BH for three possible astrophysical scenarios. The excited overtones 8th to 69th were omitted in order to avoid overloading the graph with curves. The LISA sensitivity curve plotted is for bursts and S/N ~ 5. This means that the fundamental and the first 70 overtones for l=2 are detectable with this chosen sensitivity threshold. A Kerr BH would have its QNMs shifted to the right, to higher frequencies. We assumed for this calculation the parameters listed in Table 1. The total amplitude emitted in the form of GWs follows the known expression given in [27,28,29]. The partition of energies among the multipoles was assumed to follow the one proposed by Davis et al.
1971 [15], and the partition among the overtones was assumed to be proportional to the ratios Q_n²/ΣQ_n², where Q_n = πf_nτ_n is the quality factor of QNM overtone n (with f_n its frequency and τ_n its damping time), and ΣQ_n² is the sum of all (infinitely many) Q_n². This partition is assumed valid only in the case that the astrophysical events that excite the QNMs have their Fourier peak below the fundamental mode (n=0) frequency. This is not the case for equal-mass coalescences, for example, but it is when m_captured << M_SMBH. The characteristic GW amplitude curves plotted in Figure 1 are empirical, but they are in agreement with the shape of the curves found by Ioka and Nakano 2007 [26].

The results might be off from a rigorous calculation, but they are satisfactory for the point we wish to make. It is clear from this graph that the fundamental mode, when emitted, masks the shape of the other overtones, making it difficult for one to determine their nominal frequencies and Qs. Even using special techniques such as the ones mentioned by Berti et al. 2006 and 2007 [13,14], it will be almost impossible to determine the frequencies of very high overtones with the required precision for measuring the frequency spacing. The high overtones form a kind of single "flat" background signal. Perhaps only the fundamental and the first five excited overtones will be determined with any satisfactory precision.

The situation might be a little better in the cases of close to equal-mass coalescences. Medium overtones (n ~ 10) might be more excited compared to the partition of energy assumed above, but this still does not help the very high overtones (n > 100) to be measured with precision. If one wants to find quantum aspects of gravity from SMBH QNMs measured using LISA, one probably has to find them in the absolute frequencies of, and the spacing among, the fundamental and first overtones.

It is puzzling to note that the angular frequency (ω) of the fundamental QNM for l=2 is only 20% off from the interesting expression ω = ħ/(πL_P² M), where πL_P² is the area of a sphere with a diameter equal to the Planck length L_P and M is the mass of the star; this can be rewritten as ω = M_P/(πM T_P), where M_P is the Planck mass and T_P is the Planck time.

Hawking Radiation

There is one curious coincidence connecting Hawking radiation, SMBHs, and laser interferometer space antennas such as LISA, BBO, and DECIGO. Even though it is strange to talk about black body radiation at "thermal" temperatures of 10⁻¹¹ K to 10⁻¹⁵ K, these are the temperatures at which SMBHs in the 10⁴-10⁸ solar mass range would peak, in the sensitive band (10⁻⁴-1 Hz) of LISA, BBO, and DECIGO, for Hawking gravitational radiation. The flux emitted in this band, however, is negligible, and so is the corresponding h. This is a pity, because Hawking radiation in gravitational waves would be an interesting tool for probing quantum aspects of gravity. Only for very low mass black holes might the flux (of high-frequency gravitational waves) be measurable some day.
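The expression quoted above, as reconstructed here from the surviving description, is easy to check numerically against the well-known Schwarzschild l=2, n=0 value Mω ≈ 0.3737 (in G = c = 1 units). A short sketch, using standard SI constants and the 3.7 × 10⁶ solar mass example from the text:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # J s
M_sun = 1.989e30     # kg

M = 3.7e6 * M_sun    # putative Milky Way SMBH mass used in the text

# Fundamental l=2 Schwarzschild QNM: M*omega ~ 0.3737 in geometric units.
omega_qnm = 0.3737 * c**3 / (G * M)

# Reconstructed expression: omega = hbar / (pi * L_P^2 * M) = c^3 / (pi G M).
L_P = math.sqrt(hbar * G / c**3)          # Planck length
omega_expr = hbar / (math.pi * L_P**2 * M)

print(f"omega_QNM  = {omega_qnm:.4e} rad/s")
print(f"omega_expr = {omega_expr:.4e} rad/s")
print(f"ratio      = {omega_qnm / omega_expr:.3f}")   # ~1.17, i.e., ~20% off
```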
Conclusion

Three astrophysical events involving Schwarzschild BHs were chosen as examples that would produce the same set of characteristic GW amplitude curves for LISA, with high signal-to-noise ratio. The curves were calculated empirically, assuming a partition of energies among the multipoles and among the overtones of the quadrupolar mode, and assuming that general relativity is the underlying theory of gravity (could other theories give significantly different results?). The signal-to-noise ratio was strong enough to keep up to the 70th excited overtone within the reach of the burst sensitivity curve for LISA with S/N ~ 5. From these curves it was apparent that the fundamental mode, when emitted, masks the shape of the other overtones, making it difficult for one to determine their nominal frequencies and Qs. Even using special techniques, it is unlikely that the frequencies of very high overtones can be determined with the required precision for measuring the frequency spacing. From measurements of SMBH QNMs using LISA, it is likely that only the fundamental and first overtones will be available for one to look for quantum aspects of gravity. Perhaps the frequencies of the fundamental QNMs themselves are directly related to quantum principles.

Figure 1. Characteristic GW amplitude (empirical) curves for the fundamental (n=0) and the first seven excited overtones, plus the 70th excited overtone, for the first-order (l=2, quadrupole) QNM of a Schwarzschild BH for three possible astrophysical scenarios, assuming general relativity as the underlying theory of gravity. We kept the mass falling into the SMBH at least 10 times smaller; otherwise the distribution of energy among the QNMs would be different from the one assumed. On the right, a table gives the set of parameters for the l=2 QNM overtones of a 3.7 × 10⁶ solar mass Schwarzschild black hole.
SemEval-2013 Task 11: Word Sense Induction & Disambiguation within an End-User Application

In this paper we describe our SemEval-2013 task on Word Sense Induction and Disambiguation within an end-user application, namely Web search result clustering and diversification. Given a target query, induction and disambiguation systems are requested to cluster and diversify the search results returned by a search engine for that query. The task enables the end-to-end evaluation and comparison of systems.

Introduction

Word ambiguity is a pervasive issue in Natural Language Processing. Two main techniques in computational lexical semantics, i.e., Word Sense Disambiguation (WSD) and Word Sense Induction (WSI), address this issue from different perspectives: the former is aimed at assigning word senses from a predefined sense inventory to words in context, whereas the latter automatically identifies the meanings of a word of interest by clustering the contexts in which it occurs (see (Navigli, 2009; Navigli, 2012) for a survey). Unfortunately, the paradigms of both WSD and WSI suffer from significant issues which hamper their success in real-world applications. In fact, the performance of WSD systems depends heavily on which sense inventory is chosen. For instance, the most popular computational lexicon of English, i.e., WordNet (Fellbaum, 1998), provides fine-grained distinctions which make the disambiguation task quite difficult even for humans (Edmonds and Kilgarriff, 2002; Snyder and Palmer, 2004), although disagreements can be solved to some extent with graph-based methods (Navigli, 2008). On the other hand, although WSI overcomes this issue by allowing unrestrained sets of senses, its evaluation is particularly arduous because there is no easy way of comparing and ranking different representations of senses. In fact, all the measures proposed in the literature tend to favour specific cluster shapes (e.g., singletons or all-in-one clusters) in the senses produced as output. Indeed, WSI evaluation is actually an instance of the more general and difficult problem of evaluating clustering algorithms.

Nonetheless, many everyday tasks carried out by online users would benefit from intelligent systems able to address the lexical ambiguity issue effectively. A case in point is Web information retrieval, a task which is becoming increasingly difficult given the continuously growing pool of Web text of the most wildly disparate kinds. Recent work has addressed this issue by proposing a general evaluation framework for injecting WSI into Web search result clustering and diversification (Navigli and Crisafulli, 2010; Di Marco and Navigli, 2013). In this task, the search results returned by a search engine for an input query are grouped into clusters and diversified by providing a reranking which maximizes the meaning heterogeneity of the top-ranking results. The SemEval-2013 task described in this paper adopts the evaluation framework of Di Marco and Navigli (2013) and extends it to both WSD and WSI systems. The task is aimed at overcoming the well-known limitations of in vitro evaluations, such as those of previous SemEval tasks on the topic (Agirre and Soroa, 2007; Manandhar et al., 2010), and enabling a fair comparison between the two disambiguation paradigms.
Key to our framework is the assumption that search results grouped into a given cluster are semantically related to each other, and that each cluster is expected to represent a specific meaning of the input query (even though it is possible for more than one cluster to represent the same meaning). For instance, consider the target query apple and the following 3 search result snippets:

1. Apple Inc., formerly Apple Computer, Inc., is...
2. The science of apple growing is called pomology...
3. Apple designs and creates iPod and iTunes...

Participating systems were requested to produce a clustering that groups snippets conveying the same meaning of the input query apple, i.e., ideally {1, 3} and {2} in the above example.

Task setup

For each ambiguous query the task required participating systems to cluster the top-ranking snippets returned by a search engine (we used the Google Search API). WSI systems were required to identify the meanings of the input query and cluster the snippets into semantically related groups according to their meanings. Instead, WSD systems were requested to sense-tag the given snippets with the appropriate senses of the input query, thereby implicitly determining a clustering of snippets (i.e., one cluster per sense).

Dataset

We created a dataset of 100 ambiguous queries. The queries were randomly sampled from the AOL search logs so as to ensure that they had been used in real search sessions. Following previous work on the topic (Navigli and Crisafulli, 2010; Di Marco and Navigli, 2013), we selected those queries for which a sense inventory exists as a disambiguation page in the English Wikipedia (http://en.wikipedia.org/wiki/Disambiguation_page). This guaranteed that the selected queries consisted of either a single word or a multi-word expression for which we had a collaboratively edited list of meanings, including lexicographic and encyclopedic ones. We discarded all queries made up of > 4 words, since the length of the great majority of queries lay in the range [1,4]. In Table 1 we compare the percentage distribution of 1- to 4-word queries in the AOL query logs against our dataset of queries. Note that we increased the percentage of 3- and 4-word queries in order to have a significant coverage of those lengths. In both cases, however, most queries contained 1 to 2 words. Note that the reported percentage distribution of query lengths differs from recent statistics for two reasons: first, over the years users have increased the average number of words per query in order to refine their searches; second, we selected only queries which were either single words (e.g., apple) or multi-word expressions (e.g., mortal kombat), thereby discarding several long queries composed of different words (such as angelina jolie actress). Finally, we submitted each query to Google search and retrieved the 64 top-ranking results returned for each query. Therefore, overall the dataset consists of 100 queries and 6,400 results. Each search result includes the following information: the page title, the URL of the page, and a snippet of the page text. We show an example of a search result for the apple query in Figure 1.

Dataset Annotation

For each query q we used Amazon Mechanical Turk to annotate each query result with the most suitable sense. The sense inventory for q was obtained by listing the senses available in the Wikipedia disambiguation page of q, augmented with additional options from the classes obtained from the section headings of the disambiguation page, plus the OTHER catch-all meaning.
For instance, consider the apple query; we show its disambiguation page in Figure 2. For each query we ensured that three annotators tagged each of the 64 results for that query with the most suitable sense among those in the sense inventory (selecting OTHER if no sense was appropriate). Specifically, each Turker was provided with the following instructions: "The goal is annotating the search result snippets returned by Google for a given query with the appropriate meaning among those available (obtained from the Wikipedia disambiguation page for the query). You have to select the meaning that you consider most appropriate". No constraint on the age, gender and citizenship of the annotators was imposed. However, in order to avoid random tagging of search results, we provided 3 gold-standard result annotations per query, which could be shown to the Turker more than once during the annotation process. In case (s)he failed to annotate the gold items, the annotator was automatically excluded.

Inter-Annotator Agreement and Adjudication

In order to determine the reliability of the Turkers' annotations, we calculated the individual values of Fleiss' kappa κ (Fleiss, 1971) for each query q and then averaged them:

κ = (1/|Q|) Σ_{q ∈ Q} κ_q

where κ_q is the Fleiss' kappa agreement of the three annotators who tagged the 64 snippets returned by the Google search engine for the query q ∈ Q, and Q is our set of 100 queries. We obtained an average value of κ = 0.66, which according to Landis and Koch (1977) can be seen as substantial agreement, with a standard deviation σ = 0.185. In Table 2 we show the agreement distribution for our 6,400 snippets, distinguishing between full agreement (3 out of 3), majority agreement (2 out of 3), and no agreement. Most of the items were annotated with full or majority agreement, indicating that the manual annotation task was generally doable for the layman. We manually checked all the cases of majority agreement, correcting only 7.92% of the majority adjudications, and manually adjudicated all the snippets for which there was no agreement. We observed during adjudication that in many cases the disagreement was due to the existence of subtle sense distinctions, like between MORTAL KOMBAT (VIDEO GAME) and MORTAL KOMBAT (2011 VIDEO GAME).

Scoring

Following Di Marco and Navigli (2013), we evaluated the systems' outputs in terms of the snippet clustering quality (Section 3.1) and the snippet diversification quality (Section 3.2). Given a query q ∈ Q and the corresponding set of 64 snippet results, let C be the clustering output by a given system and let G be the gold-standard clustering for those results. Each measure M(C, G) presented below is calculated for the query q using these two clusterings. The overall result on the entire set of queries Q in the dataset is calculated by averaging the values of M(C, G) obtained for each single test query q ∈ Q.

Clustering Quality

The first evaluation concerned the quality of the clusters produced by the participating systems. The Rand Index (RI) of a clustering C is a measure of clustering agreement which determines the percentage of correctly bucketed snippet pairs across the two clusterings C and G. RI is calculated as follows:

RI(C, G) = (TP + TN) / (TP + TN + FP + FN)

where TP is the number of true positives, i.e., snippet pairs which are in the same cluster both in C and in G, TN is the number of true negatives, i.e., pairs which are in different clusters in both, and FP and FN are the numbers of false positives and false negatives, respectively. Since the RI does not account for agreement occurring by chance, we also consider the Adjusted Rand Index (ARI), which corrects the RI for chance agreement:

ARI(C, G) = (RI(C, G) − E(RI(C, G))) / (max RI(C, G) − E(RI(C, G)))   (3)

where E(RI(C, G)) is the expected value of the RI.

Table 3. Contingency table between the gold-standard clustering G = (G_1, …, G_p) and the output clustering C = (C_1, …, C_m):

          C_1    C_2   ···   C_m   | sum
G_1       n_11   n_12  ···   n_1m  | a_1
G_2       n_21   n_22  ···   n_2m  | a_2
⋮          ⋮      ⋮           ⋮    |  ⋮
G_p       n_p1   n_p2  ···   n_pm  | a_p
sum       b_1    b_2   ···   b_m   | N
Using the contingency table reported in Table 3, whose cell n_ij gives the number of snippets in common between G_i and C_j (namely, n_ij = |G_i ∩ C_j|), with row totals a_i (the number of snippets in G_i), column totals b_j (the number of snippets in C_j), and grand total N = 64, we can quantify the degree of overlap between C and G. Now, the above equation can be reformulated as: ARI(C, G) = [Σ_{i,j} C(n_ij, 2) − Σ_i C(a_i, 2) Σ_j C(b_j, 2) / C(N, 2)] / [(1/2)(Σ_i C(a_i, 2) + Σ_j C(b_j, 2)) − Σ_i C(a_i, 2) Σ_j C(b_j, 2) / C(N, 2)] (4), where C(n, 2) = n(n − 1)/2. The ARI ranges between −1 and +1 and is 0 when the index equals its expected value. The Jaccard Index (JI) is a measure which takes into account only the snippet pairs which are in the same cluster both in C and G, i.e., the true positives (TP), while neglecting the true negatives (TN), which are the vast majority of cases. JI is calculated as follows: JI(C, G) = TP / (TP + FP + FN). Finally, the F1 measure calculates the harmonic mean of precision (P) and recall (R). Precision determines how accurately the clusters of C represent the query meanings in the gold standard G, whereas recall measures how accurately the different meanings in G are covered by the clusters in C. We follow Crabtree et al. (2005) and define the precision of a cluster C_j ∈ C as follows: P(C_j) = |C_j^s| / |C_j|, where C_j^s is the intersection between C_j ∈ C and the gold cluster G^s ∈ G which maximizes the cardinality of the intersection. The recall of a query sense s is instead calculated as: R(s) = Σ_{C_j ∈ C^s} |C_j^s| / n_s, where C^s is the subset of clusters of C whose majority sense is s, and n_s is the number of snippets tagged with query sense s in the gold standard. The total precision and recall of the clustering C are then calculated as: P = Σ_{C_j ∈ C} |C_j| P(C_j) / Σ_{C_j ∈ C} |C_j| and R = (1/|S|) Σ_{s ∈ S} R(s), where S is the set of senses in the gold standard G for the given query (i.e., |S| = |G|). The two values of P and R are then combined into their harmonic mean, namely the F1 measure: F1 = 2PR / (P + R). Clustering Diversity Our second evaluation is aimed at determining the impact of the output clustering on the diversification of the top results shown to a Web user. To this end, we applied an automatic procedure for flattening the clusterings produced by the participating systems to a list of search results. Given a clustering C = (C_1, C_2, . . . , C_m), we add to the initially empty list the first element of each cluster C_j (j = 1, . . . , m); then we iterate the process by selecting the second element of each cluster C_j such that |C_j| ≥ 2, and so on. The remaining elements returned by the search engine, but not included in any cluster of C, are appended to the bottom of the list in their original order. Note that systems were asked to sort snippets within clusters, as well as clusters themselves, by relevance. Since our goal is to determine how many different meanings are covered by the top-ranking search results according to the output clustering, we used the measures of S-recall@K (Subtopic recall at rank K) and S-precision@r (Subtopic precision at recall r) (Zhai et al., 2003). S-recall@K determines the ratio of different meanings for a given query q covered in the top K results returned: S-recall@K = |{sense(r_i) : i = 1, . . . , K}| / g, where sense(r_i) is the gold-standard sense associated with the i-th snippet returned by the system, and g is the total number of distinct senses for the query q in our gold standard. S-precision@r instead determines the ratio of different senses retrieved for query q in the first K_r snippets, where K_r is the minimum number of top results for which the system achieves recall r. The measure is defined as follows: S-precision@r = |{sense(r_i) : i = 1, . . . , K_r}| / K_r. Baselines We compared the participating systems with two simple baselines: • SINGLETONS: each snippet is clustered as a separate singleton cluster (i.e., |C| = 64). • ALL-IN-ONE: all 64 snippets are grouped into a single cluster (i.e., |C| = 1).
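To make the scoring measures above concrete, the following is a minimal Python sketch (our own illustration, not the task's official scorer) of RI, ARI, JI and S-recall@K. The function names and the representation of a clustering as a flat list of per-snippet cluster labels are assumptions of this sketch:

```python
from collections import Counter
from itertools import combinations
from math import comb

def pair_counts(system, gold):
    """Count TP/FP/FN/TN over all snippet pairs; `system` and `gold` give
    the cluster label of each snippet in the two clusterings."""
    tp = fp = fn = tn = 0
    for i, j in combinations(range(len(gold)), 2):
        same_sys, same_gold = system[i] == system[j], gold[i] == gold[j]
        if same_sys and same_gold:
            tp += 1          # pair together in both clusterings
        elif same_sys:
            fp += 1          # together only in the system output
        elif same_gold:
            fn += 1          # together only in the gold standard
        else:
            tn += 1          # apart in both clusterings
    return tp, fp, fn, tn

def rand_index(system, gold):
    tp, fp, fn, tn = pair_counts(system, gold)
    return (tp + tn) / (tp + fp + fn + tn)

def jaccard_index(system, gold):
    tp, fp, fn, _ = pair_counts(system, gold)
    return tp / (tp + fp + fn)

def adjusted_rand_index(system, gold):
    """Closed-form ARI from the contingency table (equation (4))."""
    n = len(gold)
    a = Counter(gold)                 # row totals a_i
    b = Counter(system)               # column totals b_j
    nij = Counter(zip(gold, system))  # cells n_ij
    sum_ij = sum(comb(v, 2) for v in nij.values())
    sum_a = sum(comb(v, 2) for v in a.values())
    sum_b = sum(comb(v, 2) for v in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)

def s_recall_at_k(ranked_senses, k, g):
    """ranked_senses: gold sense of each snippet in the flattened ranking;
    g: number of distinct gold senses for the query."""
    return len(set(ranked_senses[:k])) / g
```

In an actual evaluation, each per-query value would then be averaged over the 100 test queries, as described above.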
These baselines are important in that they make explicit the preference of certain quality measures towards clusterings made up of a small or large number of clusters. Systems Five teams submitted 10 systems, of which 9 were WSI systems, while 1 was a WSD system, i.e., one using the Wikipedia sense inventory for performing the disambiguation task. All systems could exploit the information provided for each search result, i.e., URL, page title and result snippet. WSI systems were requested to use unannotated corpora only. We asked each team to provide information about their systems; in Table 4 we report the resources used by each system. The HDP and UKP systems use Wikipedia as raw text for sampling word counts; DULUTH-SYS9-PK2 uses the first 10,000 paragraphs of the Associated Press wire service data from the English Gigaword Corpus (Graff, 2003, 1st edition), whereas DULUTH-SYS1-PK2 and DULUTH-SYS7-PK2 both use the snippets for inducing the query senses. Finally, the UKP systems were the only ones to retrieve the Web pages from the corresponding URLs and exploit them for WSI purposes. They also use WaCky (Baroni et al., 2009) and a distributional thesaurus obtained from the Leipzig Corpora Collection (Biemann et al., 2007). SATTY-APPROACH1 just uses snippets. Results We show the results of RI and ARI in Table 5. The best performing systems are those from the HDP team, with considerably higher RI and ARI. The next best systems are SATTY-APPROACH1, which uses only the words in the snippets, and the only WSD system, i.e., RAKESH. SINGLETONS perform well with RI, but badly when chance agreement is taken into account. As for F1 and JI, whose values are shown in Table 6, the two HDP systems again perform best in terms of F1, and are on par with UKP-WSI-WACKY-LLR in terms of JI. The third best approach in terms of F1 is again SATTY-APPROACH1, which however performs badly in terms of JI. The SINGLETONS baseline clearly obtains the best F1 performance, but the worst JI results. The ALL-IN-ONE baseline outperforms all other systems on the JI measure, because TN are not considered, which favours large clusters. To get more insights into the performance of the various systems, we calculated the average number of clusters per clustering produced by each system and compared it with the gold standard average. We also computed the average cluster size, i.e., the average number of snippets per cluster. The statistics are shown in Table 7. Interestingly, the best performing systems are those whose average number of clusters and average cluster size are closest to the gold-standard ones. This finding is also confirmed by Figure 3, where we plot each system according to its average cluster number and size: again, the distance from the gold standard is meaningful. We now move to the diversification performance, calculated in terms of S-recall@K and S-precision@r, whose results are shown in Table 8 (S-recall@K for K = 5, 10, 20) and Table 9 (S-precision@r), respectively. Here we find that, again, the HDP team obtains the best performance, followed by RAKESH. We note, however, that not all systems optimized the order of clusters, and of the snippets within clusters, by relevance. Conclusions and Future Directions One of the aims of the SemEval-2013 task on Word Sense Induction & Disambiguation within an End User Application was to enable an objective comparison of WSI and WSD systems when integrated into Web search result clustering and diversification.
The task is a hard one, in that it involves clustering, but it provides clear-cut evidence that our end-to-end application framework overcomes the limits of previous in-vitro evaluations. Indeed, the systems which create good clusters and better diversify search results, i.e., those from the HDP team, achieve good performance across all the proposed measures, with no contradictory evidence. (Figures: per-system curves of S-recall@K against K and of S-precision@r against r for all ten systems: HDP-LEMMA, HDP-NOLEMMA, SYS1.PK2, SYS7.PK2, SYS9.PK2, SATTY-APPROACH1, UKP-WSI-WACKY-LLR, UKP-WSI-WP-LLR2, UKP-WSI-WP-PMI, RAKESH.) Our annotation experience showed that the Wikipedia sense inventory, augmented with our generic classes, is a good choice for semantically tagging search results, in that it covers most of the meanings a Web user might be interested in. In fact, only 20% of the snippets were annotated with the OTHER class. Future work might consider large-scale multilingual lexical resources, such as BabelNet (Navigli and Ponzetto, 2012), both as sense inventory and for performing the search result clustering and diversification task.
2014-07-01T00:00:00.000Z
2013-06-01T00:00:00.000
{ "year": 2013, "sha1": "da033ab235296ce5978e4d4e620e571b8ff0b6d2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "2ddc63680dbe21b4b0ae713c6f39bdafb02f9de2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
219604490
pes2o/s2orc
v3-fos-license
Credit Risk Assessment for Small and Microsized Enterprises Using Kernel Feature Selection-Based Multiple Criteria Linear Optimization Classifier: Evidence from China Introduction Nowadays, faced with the challenges of economic globalization, credit risk assessment of small and microsized enterprises has attracted considerable attention from financial market investors, financial regulators, and national governments. The main purpose of credit risk assessment is to measure the possibility of enterprises' default through qualitative analysis and quantitative calculation of the factors that may lead to credit risk, and to provide a basis for banks' credit decision-making and risk prevention. In reality, even a small increase in enterprises' bad credit levels can lead to huge losses for financial institutions [1]. Thus, if we can accurately evaluate the credit risk of an enterprise, this will not only promote the improvement of the enterprise's own risk management, but also help banks effectively prevent default risk, thereby improving the operating efficiency of the whole capital market. Small and microsized enterprises have been recognized as the predominant type of business unit in most Asian economies. In recent years, small and microsized enterprises have played an increasingly important role in promoting economic growth, increasing employment opportunities, and creating industries. For example, according to the Ministry of Commerce of China, small and microsized enterprises contribute 60% of GDP, provide 80% of urban employment opportunities, and introduce 75% of new products, accounting for 65% of patents and inventions. Thus, the important role played by Chinese small and microsized enterprises is obvious. Therefore, it is necessary to establish a credit risk indicator system and a credit risk assessment model especially for small and microsized enterprises. However, the construction of a credit risk indicator system for small and microsized enterprises differs from that for large enterprises. For instance, the financial information of small and microsized enterprises is incomplete and opaque, and therefore cannot objectively and truly reflect their comprehensive credit risk status. So it is far from enough to use financial information only, as is mainly done for large enterprises. How to build a comprehensive credit risk indicator system that includes behavior variables, supervision variables, and other novel related variables has become the main focus in today's SME credit risk assessment. Moreover, improvements and optimization of models can further increase predictive accuracy, to better help enterprises and financial institutions with their risk management and risk prevention. Recently, many researchers have paid great attention to improving the algorithms of credit risk assessment models, including statistical models, intelligent models, and optimization models. Evidence indicates that the models used nowadays are more advanced and complicated than before, and these models have also proved to be more effective. For instance, Zhang et al. proposed a sparse multicriteria optimization classifier to deal with credit risk assessment; the results showed that the proposed model is more efficient and has better interpretability as well as generalization power [2]. Zhang et al.
presented an improved sequential minimal optimization learning algorithm named FV-SMO, using credit data from the Chinese Banking Regulatory Commission. The experimental results demonstrated that FV-SMO performed much better in saving computational cost and increasing predictive accuracy than five other state-of-the-art classification methods in enterprises' credit risk assessment [3]. However, few studies examine how to carry out feature selection and classification simultaneously. In this paper, firstly, we construct a credit risk indicator system especially for Chinese SMEs, which contains six parts: basic information, financial information, actual controllers' information, behavior information, supervision information, and policy information. Secondly, we improve the MCLOC in two respects. First, the one-norm of the feature kernel weight vector is introduced into the objective function of MCLOC for feature selection and data dimensionality reduction. Second, kernel feature selection is introduced, and the importance of each feature is expressed by its kernel feature weight. The empirical results show that the KFS-MCLOC not only achieves high predictive accuracy, but also has great advantages in the feature selection process. The experimental results are as follows. Firstly, the proposed KFS-MCLOC has greater advantages in predictive accuracy, interpretability, and stability than the other models. Secondly, the KFS-MCLOC selects 10 features from 53 original features and assigns the selected features their weights automatically. Thirdly, the features selected by the KFS-MCLOC are further compared with the features selected by the logistic regression model with stepwise selection, and the comparison shows that the indicators "quick ratio; net operating cash flow; abnormal times of water, electricity, and tax fee; overdue days of enterprises' loans; and mortgage and pledge status" are the most influential risk factors. The remaining sections are structured as follows. Section 2 provides an introduction to credit risk assessment, feature selection, and sparse learning and classification models, with reviews and comparisons of the related literature. Section 3 proposes a new model, the kernel feature selection-based multiple criteria linear optimization classifier (KFS-MCLOC). Section 4 presents the experimental design, including the Chinese small and microsized enterprises' credit risk indicator system, dataset description and preprocessing, parameter setting, and models' evaluation criteria, whilst Section 5 describes and analyzes the experimental results. Finally, Section 6 is devoted to the conclusions as well as future work. Literature Review and Related Works At present, research on credit risk assessment of small and microsized enterprises in academia and practice mainly focuses on two aspects: one is the design of the credit risk indicator system; the other is the construction of the credit risk prediction model. In this section, we review and discuss the credit risk indicator system, feature selection methods, and credit risk assessment models, using examples from the past literature. Credit Risk and Credit Risk Assessment. Credit risk assessment is emerging as an important topic of concern. Recently, it has played a more and more important role in assessing the creditworthiness of individuals and enterprises.
Generally, enterprises' credit risk refers to the risk associated with financing problems [3]. Credit risk can not only lead to creditors' economic losses, but also lead to business failure (or corporate distress, bankruptcy, or corporate failure). Therefore, how to avoid credit risk crises of small and microsized enterprises has been a main focus in recent years. The purpose of enterprises' credit risk assessment is to distinguish good enterprises from bad ones by various methods, which is essentially a binary classification problem. Recently, the binary classification problem has become the focus of research in the fields of statistics, machine learning, and optimization algorithms, and various methods including logistic regression, SVM, ANN, and MCLPC have been applied to solve it. Credit Risk Assessment Models 2.2.1. Statistical Models. Originally, credit risk was evaluated by experts' experience, which then evolved into the 5Cs theory. With the development of statistical technology, statistical methods were applied to predict enterprises' credit risk. In 1936, Fisher first established discriminant analysis, a very classical statistical method, to discriminate between two groups of applicants [4]. Later on, Altman proposed the famous Z-score model, which is based on the discriminant analysis model [5]. After that, Orgler applied linear regression to credit risk evaluation, aiming at differentiating between "good" and "bad" credit applicants for commercial banks in practical credit-scoring applications [6]. However, linear regression has a strict linearity assumption and many other restrictions. Wiginton proposed the logistic regression model for bankruptcy prediction [7]. Since logistic regression has no linearity requirement and is easy to understand and interpret, this method is widely used to solve credit risk assessment problems in real business practice. However, for most statistical methods, the shortcomings are obvious: they cannot deal effectively with high-dimensional data, they rely on restrictive assumptions, and their computation time is long. Artificial Intelligence Models. In recent years, with the development of machine learning and the wide use of big data, more and more sophisticated intelligent approaches have emerged and been widely applied to enterprises' credit risk prediction, such as neural networks [8][9][10], genetic algorithms [11,12], and decision trees [13][14][15][16]. The literature shows that intelligent techniques perform better in credit risk assessment than traditional statistical methods [17], and artificial intelligence methods have proved to have higher computational accuracy, less computation time, and lower computation cost. Nevertheless, the higher predictive accuracy of artificial intelligence models is often associated with lower interpretive power and longer training time. So, despite the advantages of using intelligent methods, there are still some challenges. For example, most artificial intelligence methods, such as ANN methods, are "black box" methods, whose output cannot directly explain the credit risk evaluation result. However, whether the results can be explained is of great importance in practice, since most rejected credit applicants will ask for the reasons for refusal. Lu et al. believed that it is very important to determine the importance of each variable by decision rule generation tools before using a black box for prediction [18]. Optimization Models.
Besides conventional statistical techniques and artificial intelligence techniques, more attention has been paid to the collaborative use of optimization methods and data mining methods. For example, the SVM, first proposed by Cortes and Vapnik, can achieve higher generalization power and promising results relative to other classification techniques in credit risk modelling [19]. Subsequently, many researchers used improved SVMs based on optimization theory for enterprises' credit risk assessment. Yao et al. proposed a novel two-stage model based on the least squares support vector machine, and the results showed that this model yields better performance than the other statistical models [20]. Moreover, similar to the idea of SVM, mathematical programming optimization techniques such as linear programming, quadratic programming, integer programming, and multicriteria linear programming, which are also based on optimization and data mining methods, are widely used in credit risk assessment. Meanwhile, the literature shows that mathematical programming techniques have higher predictive accuracy and better explanatory power. In the early 2000s, Shi et al. proposed a compromise solution-based MCLPC model using behavior analysis of credit cardholders [21]. Maldonado et al. proposed a mixed-integer linear programming model for simultaneous classification and feature selection; the experimental results showed the effectiveness of this method in terms of predictive performance [22]. In addition, some other researchers have also solved linear and nonlinear problems through optimization methods [23,24]. Feature Selection and Sparse Learning. Feature selection is a necessary step to select features from a large number of candidates when using classification algorithms to build credit risk assessment models, because the quality of the data will significantly affect the performance of almost all algorithms. In most cases, it is highly possible that real-world data contain many irrelevant and redundant features, whereas an appropriate feature selection method can reduce high feature dimensionality and remove irrelevant and redundant features [12]. Ala'raj and Abbod's experimental results proved that choosing an optimal subset of features can improve the prediction accuracy of classifiers when constructing a hybrid model [25]. Zhang et al. also indicated that the feature selection process is of great importance in reducing computation time and increasing predictive accuracy [26]. Generally, the two most commonly used feature selection approaches are the filter and wrapper methods [27]. However, these two methods can only improve predictive accuracy; they cannot automatically find the most important features. To improve interpretability, some researchers have proposed to solve this problem by sparse methods, among which the zero-norm and the one-norm are the most typical. Generally, zero-norm regularization is considered a good method in theory. However, since the zero-norm is a nonconvex discontinuous function, the corresponding mathematical problems are difficult to solve and domain knowledge is needed to control the values of hyperparameters, so it is not suitable for large-scale high-dimensional data problems. The one-norm, by contrast, is convex and can be used directly to obtain sparse features. In general, however, these sparse methods are difficult to integrate into kernel functions of a high-dimensional feature space.
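As a concrete, if simplified, illustration of how a one-norm penalty yields sparse feature weights, here is a minimal sketch using L1-penalized logistic regression on synthetic data rather than the MCLOC objective developed later in this paper; the dataset, parameter values, and variable names are assumptions of the example:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for a 53-indicator credit dataset.
X, y = make_classification(n_samples=200, n_features=53, n_informative=10,
                           random_state=0)
X = MinMaxScaler().fit_transform(X)  # min-max normalization, as in Section 4

# The 'liblinear' solver supports the L1 (one-norm) penalty; C trades
# data fit against sparsity (smaller C gives a sparser weight vector).
clf = LogisticRegression(penalty="l1", C=0.1, solver="liblinear").fit(X, y)

selected = np.flatnonzero(clf.coef_[0])  # features with nonzero weight
print(f"{len(selected)} of {X.shape[1]} features kept:", selected)
```

The same mechanism, a one-norm term added to the objective, is what KFS-MCLOC exploits, except that the penalty is placed on the feature kernel weight vector rather than on raw coefficients.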
More recently, many researchers have studied sparse methods; for instance, Sun used sparse nonnegative matrix factorization to reduce data dimensionality, and the empirical results showed that the NMF-SVM model has relatively good predictive performance [28]. Mei proposed sparse coding with sparse dictionaries (the K-SVD method) for enterprises' credit risk prediction, and the empirical results demonstrated that this method shows superior predictive performance [17]. Multiple Criteria Linear Optimization Classifier (MCLOC). For a binary classification problem, we are given a dataset D = {(x_1, y_1), . . . , (x_n, y_n)} with a feature set and class labels y_i ∈ {−1, 1}, where d is the dimensionality of the input space and n is the sample size. According to the research works [29][30][31], two measures can be used to make a separation between the positive class and the negative class when solving a two-class classification problem: one measure is the overlapping degree of deviation from the separating hyperplane, and the other is the distance between input points and the separating hyperplane. Thus, in the case of linearly separable data, a multiple criteria linear optimization classifier (MCLOC) model can be denoted as: min_{w, b, α, β} C Σ_{i=1}^n α_i − Σ_{i=1}^n β_i, subject to y_i(w^T x_i − b) = β_i − α_i, α_i ≥ 0, β_i ≥ 0, i = 1, . . . , n (1), where α_i is the overlapping degree and β_i is the distance by which the input point x_i departs from the decision hyperplane, the penalty constant C (C > 0) is used to trade off the overlapping degree Σ_{i=1}^n α_i against the separation degree Σ_{i=1}^n β_i, the weight vector w (w = (w_1, . . . , w_d)^T) consists of the weights of the different features, and the scalar b (b ∈ R) is an unrestricted variable. Multiple Criteria Quadratic Optimization Classifier (MCQOC). For the MCOC model in (1), the function ‖w‖²₂ regarding the weight vector w, with a penalty factor D (D > 0) which determines the complexity of the classifier model, is added to the objective function of the classification problem, and the linear sum of errors is replaced by the squared sum. Thus, we get a multiple criteria quadratic optimization classifier (MCQOC) with a quadratic objective function and linear constraints, which can be denoted as: min_{w, b, α, β} (D/2)‖w‖²₂ + C Σ_{i=1}^n α_i² − Σ_{i=1}^n β_i², subject to y_i(w^T x_i − b) = β_i − α_i, α_i ≥ 0, β_i ≥ 0, i = 1, . . . , n (2). Based on the constraints in the MCOC model in (2), we can calculate the intercept b (b ∈ R), and the separating hyperplane regarding the weight vector w (w ∈ R^d) is defined as w^T x = b, where x is any input point from the independent test set. Thus, for a new input point x, its class label y can be predicted by the following decision function: y = sign(w^T x − b). That is, the input point x is classified as the positive class (y = 1) if w^T x ≥ b; otherwise, the input point x is classified as the negative one (y = −1). In general, there are three main characteristics that make the multiple criteria optimization classifier (MCOC) more popular than some other traditional classifier models. Firstly, both the principle and the algorithm of MCOC are relatively easy to interpret in practice. Secondly, MCOC attains good generalization power as well as an excellent classification accuracy rate, since it can find the best balance between minimizing the overlapping degree and maximizing the total distance from the boundary. Thirdly, since the MCOC classifier is very easy and simple to implement and its parameters are easy to adjust, the performance of the model can be improved considerably. Besides, for the multiclass classification problem, the above MCOC model can be extended to multiple one-versus-one and one-versus-the-rest classifiers. The KFS-MCLOC Model.
Nonlinearly separable data often appear in the real business world, especially when classifying the credit status of SMEs. Traditionally, a basis function ϕ(·) can be used to transform the nonlinear problem into a linear one by mapping the input points from the input space into a new high-dimensional feature space, where the data are linearly separable. For any input point x_j from the training set D, the weight vector w is expressed as a linear combination with respect to the instance coefficient vector λ (λ = (λ_1, . . . , λ_n)^T) and the class labels y, and we have w = Σ_{j=1}^n λ_j y_j ϕ(x_j). Then, for any two input points x_i and x_j from the training set D, their dot product ϕ(x_i)^T ϕ(x_j) with respect to the basis function ϕ(·) can be replaced with the kernel function K(x_i, x_j). Therefore, the separating hyperplane is reformulated as Σ_{j=1}^n λ_j y_j K(x_j, x) = b. Here, for any two input points x_i and x_j from the training set D, the total kernel is the weighted sum of the per-feature kernels, K(x_i, x_j) = Σ_{m=1}^d μ_m K_m(x_i, x_j), where μ = (μ_1, . . . , μ_d)^T is the feature kernel weight vector (a brief code sketch of this weighted kernel is given below). For the purpose of dimensionality reduction, the one-norm of the feature kernel weight vector μ, with a sparsity factor S, can be introduced into the MCLOC model in (1). At the same time, kernel feature selection is realized by applying the total kernel function to the MCLOC model in (1), so the kernel feature selection-based MCLOC (KFS-MCLOC) model can be written as: min_{λ, b, α, β, μ} C Σ_{i=1}^n α_i − Σ_{i=1}^n β_i + S‖μ‖₁, subject to y_i(Σ_{j=1}^n λ_j y_j K(x_j, x_i) − b) = β_i − α_i, α_i ≥ 0, β_i ≥ 0, i = 1, . . . , n (8). Owing to the discontinuity of the one-norm of the feature kernel weight vector μ in the objective function of the KFS-MCLOC model in (8), let |μ_m| ≤ t_m (t_m ≥ 0), and we have the final KFS-MCLOC model with the form: min_{λ, b, α, β, μ, t} C Σ_{i=1}^n α_i − Σ_{i=1}^n β_i + S Σ_{m=1}^d t_m, subject to the same constraints plus −t_m ≤ μ_m ≤ t_m, t_m ≥ 0, m = 1, . . . , d (9). As shown in (9), the feature kernel weight μ_m represents the importance of each feature. The larger μ_m is, the higher the importance of the feature in the mth dimension, and the mth feature should then be retained in the feature space. Otherwise, the feature is redundant and can be removed from the feature space. In this way, the high-dimensional feature vector can be transformed into a low-dimensional one by means of the feature kernel weight vector in the total kernel function, which makes the model's subsequent calculations efficient. For any input point x, its class label can be determined by the following decision function: y = sign(Σ_{j=1}^n λ_j y_j K(x_j, x) − b), where the total kernel K is computed with the optimal feature kernel weight vector μ. Finally, for two input points x_i and x_j, the RBF kernel function regarding the mth feature is defined as K_m(x_i, x_j) = exp(−(x_im − x_jm)² / σ²), where the parameter σ is specified by the user. Algorithmic Design of KFS-MCLOC. The overall process of the experimental design of KFS-MCLOC is shown in Figure 1 and can be divided into several stages, ending with model evaluation (using seven performance criteria: total accuracy, F1 score, MCC, KS score, AUC, type-I accuracy, and type-II accuracy, to evaluate the six models' predictive performance) and Stage 6, feature importance analysis (using importance analysis and the reduction rate to make an in-depth analysis of the selected features). Empirical Design for Small and Microsized Enterprises' Credit Risk Assessment In this section, we use the proposed KFS-MCLOC and other classifiers for small and microsized enterprises' credit risk assessment. This section includes four parts: the Chinese small and microsized enterprises' credit risk indicator system, data description and preprocessing, parameter setting, and performance evaluation criteria. Design of Small and Microsized Enterprises' Credit Risk Indicator System. In this experiment, the dependent variable is defined by whether the enterprise is in credit risk status.
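Returning briefly to the weighted total kernel introduced above, the following is a minimal sketch (our own illustration, not the authors' code) of K(x_i, x_j) = Σ_m μ_m K_m(x_i, x_j) with per-feature RBF kernels; all names and values here are assumptions of the example:

```python
import numpy as np

def per_feature_rbf(X, Z, m, sigma=0.2):
    """RBF kernel computed on the m-th feature only: K_m(x_i, z_j)."""
    diff = X[:, m:m + 1] - Z[:, m:m + 1].T      # pairwise differences
    return np.exp(-(diff ** 2) / sigma ** 2)

def total_kernel(X, Z, mu, sigma=0.2):
    """K(x, z) = sum_m mu_m * K_m(x, z); mu_m = 0 drops feature m entirely."""
    K = np.zeros((X.shape[0], Z.shape[0]))
    for m, w in enumerate(mu):
        if w != 0.0:                            # skip pruned features
            K += w * per_feature_rbf(X, Z, m, sigma)
    return K

# Example: 5 samples, 4 features, with features 1 and 3 pruned (mu = 0).
rng = np.random.default_rng(0)
X = rng.random((5, 4))
mu = np.array([0.6, 0.0, 0.3, 0.0])
K = total_kernel(X, X, mu)
print(K.shape, K[0, 0])  # (5, 5); each diagonal entry equals sum(mu)
```

With this kernel in hand, we return to the experimental design.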
If the enterprise is in credit risk status, then the indicator equals 1; otherwise, it equals 0. Meanwhile, as for the independent variables, a total of 53 credit risk indicators are selected as initial input variables. Based on the characteristics of small and microsized enterprises, we construct a multidimensional and multilevel credit risk indicator system especially for Chinese small and microsized enterprises; these variables are picked based not only on financial experts' suggestions but also on the past classical and influential literature [32]. On the whole, the credit risk indicators in this paper can be broadly classified into six overall dimensions: basic information, enterprises' financial information, the actual controller's information, behavior information, supervision and evaluation information, and policy information. First of all, the basic information of the enterprise directly reflects its most basic situation and its fundamental quality. In general, the higher the quality of the enterprise itself, the lower its credit risk will be. Experience shows that the size and the ownership structure of an enterprise are essential for credit risk assessment. Moreover, the industry to which the enterprise belongs also has a great influence on whether the enterprise will go bankrupt in the future. This is because some industries are largely affected by macroeconomic factors such as national policies, which may lead to unstable operating conditions for enterprises. The description of enterprises' basic information is shown in Table 1. At the second level, financial information is an indispensable link in the construction of the credit risk indicator system. The literature shows that the financial information of the enterprise must be fully considered when constructing an enterprise credit evaluation index system. Hence, we select commonly used financial indicators (solvency, profitability, operating capacity, and growth capacity) to reflect the financial situation of the enterprise, and cash flow-related indicators to reflect the capital turnover of the enterprise. The description of small and microsized enterprises' financial information is shown in Table 2. At the third level, we focus on information about small and microsized enterprises' actual controllers. This is of great importance since the management rights and ownership of small and microsized enterprises are usually highly concentrated in the hands of the actual controller, which means that the actual controller has absolute power to influence the decision-making and the future development direction of the enterprise. Simon found that managerial ability plays a much greater role in predicting credit rating performance than other indicators [33]. Karabag found that top management actions and the quality of human resources have a great impact on enterprises' failures [34]. From the existing literature, we find that some indicators are frequently chosen, for instance, marital status, educational background, age, gender, daily behavior, and credit status. In this paper, the actual controller's information includes basic information, behaviour information, and supervision information. The description of the actual controller's information is shown in Table 3. At the fourth level, we focus on the small and microsized enterprises' behaviour information.
Under the "Internet Plus" and "Big Data Era," enterprises' behaviour records are increasingly being concerned by the government, financial institutions, and counterparts. Since the influence of enterprises' behaviour becomes more and more extensive, researchers pay more attention to the enterprises' daily behaviour. In fact, nowadays, the relationship between the expected behaviour of enterprises and their past social relations as well as their past behaviour rules is far more important than that of their financial information. For instance, Wang et al. proved that besides conventional hard information, soft information like behaviour information also enters into the lending decision process [35]. erefore, we add the behaviour information to supplement the content of the credit risk indicator system. e description of behaviour information is shown in Table 4. As for the fifth level, since supervision information reflects the past production, operation, and credit status of the enterprise, it can help the government to better supervise enterprises and help banks to better prevent the default risk. erefore, supervision information is necessary. e Figure 1: e overall process of the KFS-MCLOC algorithm for small and microsized enterprises' credit risk assessment. 6 Complexity description of supervision and evaluation information is shown in Table 5. Finally, in the macrolevel, policy information can largely influence the future development direction of the enterprises. In general, enterprises which are in line with national industrial policies or bank's own policy are more likely to get credit support from banks; on the contrary, banks will be more cautious. e basic description of policy information is shown in Table 6. Solvency Asset-liability ratio X 8 Current ratio X 9 Quick ratio X 10 Interest coverage ratio X 11 Profitability Operating cash flow liability ratio X 12 Return on total asset X 13 Gross profit margin X 14 Return on sales X 15 Return on total assets X 16 Return on equity X 17 Operating capacity Inventory turnover ratio X 18 Account receivable turnover X 19 Total asset turnover X 20 Growth capacity Sales growth rate X 21 Profit growth rate X 22 e growth rate of total assets X 23 Capital accumulation rate X 24 Cash flow information Net cash flow from operating activities X 25 Net cash flow e data we use in this paper come from a Chinese commercial bank. is dataset comprises 188 small and microsized enterprises, in which 130 enterprises are regarded as "not in credit risk status (normal)" while 58 of them are regarded as "in credit risk status (default)." e enterprises we choose cover various industries, such as industrial enterprises, agricultural enterprises, and marine enterprises. Moreover, the nature of enterprises also covers many types, such as private, joint venture, and foreign capital. In terms of data type, this dataset includes various types of variables, such as binary data (male/female), character data (bank's historical credit rating), and numerical data (financial ratios). erefore, the enterprises' data chosen in this paper are relatively representative. Data Preprocessing. In reality, irrelevant and redundant features will not only reduce the predictive performance of a classification model but also increase the computational complexity [36]. Generally, data will be firstly preprocessed before going to the model by converting the initial data to standard form data. In this paper, the "Binning" and "min-max normalization" are used as data preprocessing methods. 
The min-max normalization method is computed as follows: X' = (X − X_min) / (X_max − X_min), where X_min represents the minimum value of X and X_max represents the maximum value of X. In order to ensure the typicality of the sample extraction, we adopt stratified random sampling to draw the training set. Through stratified random sampling, 70 normal enterprises and 30 default enterprises are selected from the total sample as training samples. The remaining 60 normal enterprises and 28 default enterprises are used as test samples. The distribution of the sample is shown in Table 7. In addition, Stone proved that cross-validation is an effective method to test the strength of the predictive power of models [37]. Since K-fold cross-validation is simple, easy to apply, and uses all data for training and validation [38], we use five-fold cross-validation for each of the aforementioned classifiers on the training subset in each iterative process. After that, we obtain the final predictive accuracy as the average of the five folds' results. Furthermore, an iterative process over the predefined parameter sets is used to find the best parameters. (Further indicator entries: whether the guarantee enterprise is abnormal, yes 1, no 0; whether the enterprise complies with the preferential policies of the bank, yes 1, no 0; X53, whether the enterprise conforms to the industry preferential policies, yes 1, no 0.) At last, the predictive performance of each classifier is reported for comparison in order to get the best classification accuracy. Parameter Setting. Based on the given training set, we employ the 5-fold cross-validation method to train KFS-MCLOC, SVM, neural networks, MCLOC, and MCQOC, and they are then tested on the independent test set. In the process of training these classifiers, discrete sets corresponding to the different parameters are predefined. As for the neural networks, we define a two-dimensional grid over the number of hidden layers and the number of nodes per hidden layer to search for the optimal network structure. That is, the number of hidden layers takes values from 1 to 3, while the number of nodes in each hidden layer varies from 10 to 50 with a stride of 2. For the SVM classifier, the optimal penalty factor C is selected from the set {1, 2, 5, ...}. Performance Evaluation Criteria. In this paper, we use seven criteria to evaluate the performance of the six classifiers: total accuracy, type-I accuracy, type-II accuracy, F1 score, MCC, KS score, and AUC. Total Accuracy. Total accuracy is one of the most popular performance evaluation criteria, defined as the number of correctly predicted samples divided by the total number of samples: TA = (TP + TN) / (TP + TN + FP + FN), where TP refers to the number of good creditors correctly classified as good; FN refers to the number of poor creditors wrongly classified as good; TN refers to the number of poor creditors correctly classified as poor; and FP refers to the number of good creditors wrongly classified as poor. Type-I Accuracy or Type-I Error. Type-I error is known as the false negative rate, and type-I accuracy is known as the true positive rate; they are, respectively, computed as type-I error = FN / (TP + FN) and type-I accuracy = TP / (TP + FN). Type-II Accuracy or Type-II Error. Type-II error is known as the false positive rate, and type-II accuracy is known as the true negative rate; they are, respectively, computed as type-II error = FP / (TN + FP) and type-II accuracy = TN / (TN + FP). 4.4.4. F1 Score.
The F1 score is commonly used to measure the predictive accuracy of a binary classification model in statistics, and it is computed as F1 = 2TP / (2TP + FP + FN). Matthews Correlation Coefficient (MCC). MCC is usually used to judge the correlation between two groups of data, and it is computed as MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)). 4.4.6. Kolmogorov-Smirnov Score (KS Score). The KS score is widely used in evaluating the discriminatory ability of a model between positive and negative samples, because it is sensitive to differences in position and shape between the two empirical cumulative distribution functions. The higher the KS score, the better the performance of the model. The KS score is computed as KS = max over decision thresholds of |TPR − FPR|. 4.4.7. AUC. AUC is regarded as a widely used measure for model performance evaluation. As a numerical value ranging from 0 to 1, AUC evaluates the classifier intuitively: the larger the AUC value, the better the evaluation performance of the model. Optimal Parameter Setting. In this paper, we study the effect of different parameters on the classification performance of SVM, MCLOC, MCQOC, KFS-MCLOC, and ANN, respectively. By running the grid search and 5-fold cross-validation against the training data, the best parameters are found: for SVM, C = 100 and σ = 10; for MCLOC, C = 5; and for MCQOC, C = 10. As for KFS-MCLOC, its optimal parameters are, respectively, 20 for the shrinkage factor S for feature selection, 200 for the penalty factor C, and 0.2 for the bandwidth σ of the RBF kernel function. For the neural network classifier, the optimal network topology is composed of 1 hidden layer with 44 nodes, in addition to the input and output layers. Model Evaluation of Predictive Results. For Chinese small and microsized enterprises' credit risk assessment, we use five-fold cross-validation to train the proposed KFS-MCLOC and the other five models on the training subsets, respectively, and then report each predictive result. In this paper, we use seven evaluation criteria in total (total accuracy (TA), type-I accuracy, type-II accuracy, F1 score, MCC, KS score, and AUC) to measure the performance of the six models. As shown in Table 8, the performance of KFS-MCLOC is significantly better than that of the other classifiers. In order to compare the predictive ability of each model more clearly, we draw ROC curves. Figure 2 shows that the ROC curve of KFS-MCLOC lies furthest toward the upper-left corner, which means that it is far better than MCLOC and MCQOC. This indicates that the predictive performance of the MCLOC can be largely improved by introducing the one-norm kernel feature selection. In addition, the ROC curves of neural networks and SVM almost overlap, and both are slightly better than logistic regression in most cases. Moreover, compared with the ROC curve, AUC, as a single number, shows the classification performance of a model more directly, so we make a supplementary comparison of the results through AUC. As shown in Table 8, the AUC of KFS-MCLOC is the highest at 0.93, while MCLOC performs the worst at 0.79. In addition, experience shows that when the negative samples have a greater influence on the judgment of results, the KS score can reflect the distinguishing ability of the model better than AUC. Figure 3 shows the comparison of KS scores among the six classifiers. We can see in Table 8 that the KS score of KFS-MCLOC is also the largest (0.85), while the KS score of logistic regression is the lowest (0.61).
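As a compact reference, here is a minimal Python sketch (our own illustration, not the authors' code) computing the criteria of Section 4.4 from predicted labels and scores; the function name, the label encoding, and the use of scikit-learn's ROC helpers are assumptions of the sketch:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def credit_metrics(y_true, y_pred, y_score):
    """y_true/y_pred: 1 = good creditor, 0 = poor creditor; y_score: P(good)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    total_acc = (tp + tn) / (tp + tn + fp + fn)
    type1_acc = tp / (tp + fn)                 # true positive rate
    type2_acc = tn / (tn + fp)                 # true negative rate
    f1 = 2 * tp / (2 * tp + fp + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    fpr, tpr, _ = roc_curve(y_true, y_score)
    ks = np.max(tpr - fpr)                     # Kolmogorov-Smirnov score
    auc = roc_auc_score(y_true, y_score)
    return dict(total=total_acc, type1=type1_acc, type2=type2_acc,
                f1=f1, mcc=mcc, ks=ks, auc=auc)
```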
The KS comparison above fully demonstrates that unbalanced data have a negative impact on the classification performance of traditional classifiers, which also corroborates the conclusion of Galar et al.'s research that, when faced with an unbalanced distribution of classes, the performance of traditional classifiers is often disappointing [39]. However, in real business practice, banks are more concerned about the type-II error, since its cost is much higher than the cost of the type-I error. Table 9 shows the type-I accuracy, type-I error, type-II accuracy, and type-II error of the six models. In order to display the results more intuitively, we draw histograms of the type-I accuracy and type-II accuracy of the six classifiers, respectively (Figure 4). From the comparison of type-I accuracy, SVM performs the best (94.64%). However, from the comparison of type-II accuracy, KFS-MCLOC performs the best (98.57%), followed by logistic regression (85.71%); the worst is MCLOC (68.57%), which is almost 30 percentage points lower than KFS-MCLOC. In conclusion, there are three main findings. Firstly, in the comparison of total accuracy, type-II accuracy, AUC, KS score, F1 score, and MCC, KFS-MCLOC's performance is significantly better than that of the other models, i.e., logistic regression, SVM, neural networks, MCLOC, and MCQOC; the exception is type-I accuracy, on which KFS-MCLOC performs slightly worse than SVM. Secondly, SVM has almost the second-best predictive performance among the six models. Thirdly, the predictive performance of MCLOC is the worst, which fully shows that it is necessary to improve the MCLOC algorithm. Reduction Rate Analysis. Generally, the purpose of dimension reduction or attribute reduction is to solve the problem of having too many features in the original feature set. Obviously, by reducing the number of original features in the feature space, the computational complexity of the model is greatly reduced and its calculation efficiency is greatly improved. Furthermore, proper feature reduction can ensure that the final predictive results from the reduced data are basically consistent with those obtained from the original dataset. In this paper, the sparse kernel method is used to filter the original feature set, which reduces the attributes while preserving the accuracy of the predictive results. Here the reduction rate is an important indicator: reduction rate = (total features − used features) / total features × 100%. In general, the fewer the features among the credit risk indicators, the greater the explanatory power of the small and microsized enterprises' credit risk assessment. The comparison of the reduction rates of the six classifiers is listed in Table 10. Normally, the fewer the selected features, the higher the reduction rate and the stronger the explanatory power of the selected features. As shown in Table 10, KFS-MCLOC selects the 10 most important features from the 53 original features, with a reduction rate of 81.13%, while the other models (logistic regression, SVM, neural networks, MCLOC, and MCQOC) have a reduction rate of 0%. In order to make a more specific analysis, we combine and classify the name, category, positive/negative impact, and weight proportion of the features selected by KFS-MCLOC, as shown in detail in Table 11.
Specifically, in terms of categories, the selected features cover financial information, the actual controller's basic information, behavioural information, and supervision information. Among them, the number of features belonging to behavioural information is the largest, accounting for more than 50% of all indicators. In terms of contribution degree, the sum of the importance weights of three features ("abnormal times of water, electricity, and tax in the past 12 months," "net operating cash flow," and "mortgage and pledge status") exceeds 50% and plays a decisive role; of these, two features belong to behaviour information and one to financial information. As for the influence direction of the selected features, five have a positive influence and five have a negative influence. Further Verification of Selected Features. In order to further verify the rationality of the important features selected by the KFS-MCLOC, we compare them with the features selected by the logistic regression model with stepwise selection, reported in Table 12. As we can see from Table 12, according to the categories of the selected features, most belong to small and microsized enterprises' behaviour information, which again proves that behaviour information plays an important role in SMEs' credit risk assessment. In addition, based on Tables 11 and 12, we discuss in more depth the indicators selected by both KFS-MCLOC and logistic regression with stepwise selection, as follows. Firstly, the "quick ratio" measures the ability of quick assets to cover current liabilities and is an important index of the short-term solvency of enterprises. The higher the quick ratio, the stronger the short-term solvency and the less likely the enterprise is to incur credit risk. Secondly, "net operating cash flow" proves to be one of the most important factors, and its importance weight is relatively high. This is probably because small and microsized enterprises have a small asset scale and a single type of production and operation, which limits their external financing channels. Once they face short-term financing pressure or fall into production difficulties, it is usually difficult for them to obtain external funds. In this case, the internal capital generated by the production and operation of small and microsized enterprises is particularly important. Therefore, through operating cash flow, we can analyze the rationality of an enterprise's capital operation, judge its ability to repay loans, and thereby assess its credit risk status. Thirdly, "abnormal times of water, electricity, and tax fees in the past 12 months" can well reflect the daily operating status behind enterprises' credit risk: if an enterprise cannot pay its water, electricity, and tax fees on time, there is a high probability of problems in its operations at this stage. Therefore, it can be preliminarily judged that the probability of the enterprise's credit risk will increase, which deserves attention. Fourthly, "overdue days of enterprises' loans" is a very important signal for judging small and microsized enterprises' risk status. The longer the overdue days of an enterprise's loans, the greater its operational risk, the less its cash flow, and the greater its credit risk. Moreover, "overdue days of enterprises' loans" is a precursor indicator for identifying high-risk enterprises.
Finally, "mortgage and pledge status" is also a very significant indicator. In general, the higher the value of the mortgage, the less the risk the enterprise will have. At present, since the financial information and operation information between banks and small and microsized enterprises are seriously asymmetric, the "mortgage and pledge status" is shown to be a particularly important factor. By increasing mortgage and pledge, credit risk can be effectively mitigated, thus reducing the occurrence of nonperforming loans. Conclusions Credit risk assessment has always been an important research topic in the fields of accounting, finance, and business. At the same time, it has become a hot research field of statistical learning, artificial intelligence, and optimization algorithm in the recent years. Nowadays, enterprises' credit risk analysis is gradually forming its own theoretical system and research framework. A good credit risk assessment model for enterprises has important practical significance on improving the awareness of credit risk, preventing the credit crisis, and avoiding the bankruptcy liquidation. Based on the credit data of small and microsized enterprises of a Chinese commercial bank, we design a credit risk indicator system, especially for small and microsized enterprises, including basic information, financial information, actual controllers' information, behaviour information, supervision information, and policy information. As for model construction, we improve the MCLOC by introducing the one-norm kernel feature selection and thereby establish the KFS-MCLOC. In order to test the effectiveness of the KFS-MCLOC, we use total accuracy, F 1 score, MCC, KS score, and AUC to compare models' predictive performance. e empirical result shows that the KFS-MCLOC Variable Variable name Category Coefficient p value X 9 Quick ratio Enterprises' financial information −0.3168 0.0014 X 20 Interest coverage ratio Enterprises' financial information −0.2924 0.0076 X 24 Operating net cash flow Enterprises' financial information −0.2401 0.0198 X 32 Accumulated overdue repayment of actual controller Actual controller's information −0.3426 ≤ 0.001 X 39 Abnormal times of water, electricity, and tax in the past 12 model performs better than the other models in almost all aspects by using a real-world credit dataset from a Chinese commercial bank. Secondly, the KFS-MCLOC selects 10 features from 53 original features and gives selected features their weight automatically. irdly, the features selected by KFS-MCLOC are further verified and compared by the features selected by logistic regression with stepwise parameter, and the indicators of "quick ratio; net operating cash flow; enterprises' abnormal times of water, electricity, and taxes fee; overdue days of enterprises' loans; and mortgage and pledge status" are proved to be the most influencing credit risk factors. is finding is meaningful for banks and regulatory institutions because these key indicators can be regarded as important credit risk factors and should be paid more attention in practice in the future. In theory, this study provides a useful idea and reference for enriching and developing the credit risk indicator system for Chinese small and microsized enterprises. In practice, this paper also has practical contribution since the effectiveness of the KFS-MCLOC model has been validated by a realworld credit dataset from a Chinese commercial bank. Contribution. In general, the contributions of this paper are as follows. 
Firstly, we construct a comprehensive multidimensional credit risk indicator system especially for small and microsized enterprises by adding enterprises' behaviour information, supervision information, and policy information. Secondly, we test the evaluation performance of the model on a real credit dataset of Chinese small and microsized enterprises. The empirical results show that the KFS-MCLOC model has great advantages in predictive accuracy and stability, which means that the model is suitable for evaluating the credit risk of small and microsized enterprises in the real business world. Thirdly, in the financial field, all credit decisions are required to be interpretable. The proposed KFS-MCLOC model can automatically select the most important indicators and determine their importance weights, which is very effective in solving the "black box" problem, helping credit personnel make understandable and traceable decisions. Limitations and Future Works. In this section, we summarize the limitations of the proposed model and put forward corresponding future work. First, since the sample size in this paper is relatively small, a larger dataset with a more complex data structure should be explored in the future to further validate the proposed model. In addition, dynamically changing data is a relatively new research problem, and more attention should be paid to it in the future. Finally, although the KFS-MCLOC is shown to be relatively effective in small and microsized enterprises' credit risk assessment, "expert technology" could also be added to the model for higher predictive accuracy. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
2020-06-11T09:05:43.881Z
2020-06-08T00:00:00.000
{ "year": 2020, "sha1": "64d0cd757f7e5de0c5f5150f2f87443c487c6bd3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2020/2394948", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "99764c4d047f25a9edb346350f9fc5c8161d5de3", "s2fieldsofstudy": [ "Business", "Economics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
134614296
pes2o/s2orc
v3-fos-license
The Rise of Soju 燒酒: The Transfer of Distillation Technology from "China" to Korea during the Mongol Period (1206-1368)* Introduction Since 2012, Paul Buell has examined the key role that the Mongols played, during the age of their massive empire (early 13th to late 14th century), in improving on a pre-existing Chinese technology through producing an easily portable apparatus for distillation. He argues that the Mongols, at a key early stage in the history of globalization, did two important things: not only did they disseminate this improved technology widely, but they also created an environment in which a variety of cultures could produce their own distilled liquors using local ingredients. 1 I adopted this theme in turn and applied it to Korea, in order to develop a case study that illustrates what happened with distillation technology in Korea during Mongol times. 2 Many individual pieces of evidence hint at the introduction of this technology from China to Korea through the Mongols. In the process of this transfer, however, Koreans promoted a new type of distilled alcohol, made with local ingredients in China and Korea mixed with fermented rice rather than mare's (or today cow's) milk as used by the Mongols. 3 This sparked the development of soju, Korea's national alcoholic drink, which has now become one of the world's most popular drinks. 4 Like many other distilled alcoholic drinks appearing at the time, soju was often called arakhi 5 by the Koreans, who adopted the "Arabic" word for brandy popularized by the Mongols (although not all arkhi / arakhi drinks then were brandies). While soju, as a foreign drink, stood out among traditional alcoholic drinks and was quickly popularized beginning in the late Koryŏ 高麗 period (918-1392), tracing its origins and process of popularization has proven difficult. The several scattered sources relating to the transfer of distilled wine technology do not provide entirely consistent information, which has led to debates over differing theories in the past as well as the present. This has led to the development of two major theories about how distilled alcohol technology was transferred to Korea: the Mongol-period origin theory and the pre-Mongol-period origin theory. Although early studies of distillation technology in Korea have helped to build an important foundation for further study, they face two major limitations. First, they have focused mostly on documentary sources, merely recycled by later studies. * ... that he presented at the workshop entitled "Scientific Transfer in Mongol Eurasia" at the Hebrew University of Jerusalem, Israel, June 10-11, 2015.
Recently, however, new evidence from archaeological and anthropological studies has shed considerable fresh light on discussions of Eurasian distillation transfer generally, which suggests that the time has come to revise the standard approaches to the history of distillation technology in Korea in cross-disciplinary ways, in order to take advantage of these new sources.[7] Second, none of the various arguments in existing theories denies that the sudden popularity of distilled alcoholic drinks in Korea happened during the Mongol period. Despite this fact, few of the early studies have made the connection between the popularization of soju as a 'new' beverage and the unprecedented level of cross-cultural contact that existed throughout thirteenth- and fourteenth-century Asia. Mongol rulers actively promoted the transfer of this distillation technology along with a host of other kinds of technologies, ideas, and goods. It is important, then, that new studies reflect a global context. I aim to produce an academic monograph that examines in detail the ways in which the transfer of distillation technology from Yuan 元 China to Koryŏ Korea took place and traces the influences present in the processes involved. As Korea's political suzerain for nearly 100 years, the Mongol empire was able to exert a major cultural influence on Korea, not only in the area of distilled alcohols such as soju but also in terms of many foods and other cultural items, including dress. While developing this idea for the proposed monograph, I have presented and discussed some important pieces of evidence about Mongol-era distillation technology transfer from China to Korea at three academic conferences. The present paper offers an expanded version of those presentations and will serve as a foundation for the forthcoming monograph. It will first examine the basic characteristics of Korea's traditional alcoholic drinks in order to distinguish soju from the other alcoholic drinks consumed in Korea, including those of foreign origin that existed before the appearance of soju. After that, the paper will critically review specific transfer vectors based on earlier sources and then add new archaeological findings to the conventional documentary sources. Finally, it will expand the historical context that facilitated the rapid rise of soju at the end of the Mongol (and late Koryŏ) period, in order to consider larger patterns such as the distribution of Mongol army camps and international trade.
Traditional Alcoholic Drinks in Korea before the Rise of Soju in the Mongol (Late Koryŏ) Period

There were several different kinds of alcoholic drinks consumed in Korea prior to the Mongol period, that is, before the late Koryŏ period. According to scattered written sources (both in Chinese and in Korean), Koreans had produced and consumed alcoholic drinks since ancient times. Many historical accounts as well as legends and folktales hint at the fact that alcoholic drinks were also an important part of the lives of Koreans, who enjoyed and used them on various occasions such as festival days.[8] The sources prior to the Koryŏ period, however, lack details regarding what kinds of alcoholic drinks Koreans consumed in early times. Scholars have assumed that, as ancient Korea was an agrarian society, the alcoholic drinks consumed were most often a turbid kind of unstrained wine (called t'akchu 濁酒 in Korean) made from fermented grains, which is not very difficult to make.[9] Such drinks might have been similar to today's makkŏlli, another popular Korean alcoholic beverage. It is milky and sweet and is made from rice or other Korean grains. Makkŏlli contains about 6-8% alcohol by volume. It forms one of the three major traditional liquors commonly drunk in Korea, along with ch'ŏngju 淸酒 (clear strained wine) and soju (distilled).

Sources also hint that different kinds of Chinese alcoholic drinks were transferred to Korea. For example, the best-preserved ancient Chinese agricultural text, the Qimin yaoshu 齊民要術 (Essential Techniques for the Welfare of the People, ca. 533-544), written by the Northern Wei dynasty (386-534) official Jia Sixie 賈思勰, was probably introduced to the Korean kingdoms through contacts with Chinese dynasties and might have influenced Korean traditional wine-making over time.[10] Some sources suggest that the Koreans developed distinctively good liquors that could compete with the best that China had to offer. For example, two Korean accounts, the Chibong yusŏl 芝峰類說 (Topical Discourses of Chibong, 1614) by Yi Sugwang 李晬光 (1563-1628) and the Haedong yeogsa 海東繹史 (Unraveling the History of Korea, 1823) by Han Ch'iyun 韓致淵 (1765-1814), introduce a poem by the Tang 唐 (618-907) dynasty poet Li Shangyin 李商隱 (c. 813-858), which includes the following line: "I am afraid that the aroma of a glass of Silla wine will go away with the wind at dawn." This same poem appears in Li Shangyin's poetry collection under the title "Young Nobleman" (Gongzi 公子).[11] Silla 新羅 was one of the three Korean kingdoms; it unified the southern and middle parts of the Korean peninsula in 668 by allying with the Tang dynasty. It maintained close contacts and carried on exchanges with China, which probably affected the development of Korea's unique liquor-making methods through the synthesis and improvement of earlier traditional and borrowed brewing techniques.

[8] Yi Sŏngu 1984, 197f, 201ff.
[9] Yi Sŏngu 1984, 10; Pae Kyŭng-Hwa 1999, 5.
[10] Yi Sŏngu 1984, 198.
We have more varied source material on Korean liquors for the Koryŏ (918-1392), a dynasty that replaced Unified Silla and lasted for almost four centuries. From this period, many literary works including poems specifically mention different kinds of alcoholic drinks consumed by Koreans. The earliest surviving source about Koryŏ liquors is the Xuanhe fengshi Gaoli tujing 宣和奉使高麗圖經 (Illustrated Account of an Official Mission to Koryŏ during the Xuanhe Reign, 1119-1125) by Xu Jing 徐兢 (1091-1153). Xu was a Chinese envoy dispatched to Koryŏ by the Song emperor Huizong 徽宗 (r. 1100-1126) on a special diplomatic mission to grasp the situation in Koryŏ and request the kingdom's military support for the Song war against the Jurchens (Chin. Ruzhen 女眞). Xu Jing's exceptional account, full of details about the political system and culture of Koryŏ society from the perspective of an outsider, includes a brief discussion of the alcoholic drinks he saw being consumed in Korea.[12] Xu says that, because the people of Koryŏ loved their liquors, they drank many cups, and even went to several drinking parties a night. It seems that, unlike the earlier Tang-dynasty Chinese who praised a distinctive Silla liquor, Xu did not like the Koryŏ wine he tasted, as he emphasized several times in his writings. We learn from his account that Koreans used nonglutinous [regular] rice and malt [yeast] to brew alcoholic beverages; they did not have sticky rice. Xu says this is why Korean liquors were not as good as contemporary Chinese liquors made with sticky rice. The fact that liquors were fermented with a type of yeast known as nuruk shows that the common Korean alcoholic drinks popular at that time were probably all based on grains.[13] Another important characteristic of Koryŏ liquors, Xu explains, is that those consumed by kings and nobles were of higher quality, comprising ch'ŏngju (clear strained liquor) and another named pŏpchu 法酒 (meaning "recipe" liquor, a kind of ch'ŏngju). Both wines were brewed by the Yangonsŏ 良醞署, the government office that presented alcoholic drinks to the court during the Koryŏ period. By contrast, ordinary people could not find these kinds of fine liquors and instead drank liquors that were thickly colored and tasted turbid.[14] This shows that by this time Korean alcoholic drinks were already divided into two classes: those consumed by nobles and those consumed by ordinary people. This broad division of liquor types in the Koryŏ period, as attested by Xu Jing, is also expressed in the Koryŏ sa 高麗史 (History of Koryŏ, 1454), the principal surviving history of Korea's Koryŏ dynasty. It was composed some six decades after the fall of Koryŏ, during the reign of King Sejong 世宗 (r. 1418-1450) of the Chosŏn dynasty. The text shows that the government office for alcoholic drinks (Yangonsŏ) was established during the reign of King Munjong 文宗 (r. 1046-1083) in order to brew high-quality alcoholic beverages, namely the noble drinks ch'ŏngju and pŏpchu, to use in state ceremonies.[15]
We learn from Chinese and Korean sources that ch'ŏngju was made by condensing fermented yet unstrained base liquor. Pŏpchu, a kind of clear strained wine, was brewed using a rich base composed of set proportions of raw ingredients. It was also called ŏju 御酒 (royal liquor) or kwanju 官酒 (official liquor), the kind of liquor kings conferred on their officials. Its compression brewing method was probably transferred from China to Korea to be used by kings and nobles and for ancestral memorial ceremonies at the Royal Ancestors' Shrine. The unstrained, turbid liquor called t'akchu 濁酒 and described by Xu Jing as a wine for ordinary people is also attested by other contemporaneous sources of the Koryŏ period. For example, scholars' poems mention white liquors such as t'akchu and paekchu 白酒 (white liquor), as well as a pakchu 薄酒 (light wine), which was consumed in the fields and drunk by travelers.[16] All of these had a weaker taste (lower alcohol content) and were dark in color. They were also easy to make. The Chinese envoy who had a chance for a quick look at Korean culture for a few months thus aptly grasped the essential characteristics of the major kinds of liquors consumed by the different classes in twelfth-century Korea.

We can find more details about the liquors enjoyed in Korea in sources for the second half of the Koryŏ period. There was even a special genre of literature that personified wines in order to satirize the times and offer lessons for life: the Kuk Sŏnsaeng chŏn 麴先生傳 (Biography of Mr. Kuk) by Yi Kyubo 李奎報 (1168-1241) and the Kuk Sun chŏn 麴醇傳 (Biography of Kuk Sun) by Im Ch'un 林椿 (fl. late twelfth century).[17] All of the characters in the two works, including Mr. Kuk, represent personified liquors of different kinds. The liquors that appear in this particular literature, i.e. biographies of wines, are mostly based on fermented grains. Therefore, we can assume that grains were the primary fermentation ingredients for the liquors commonly consumed.[18]

While mid-Koryŏ sources attest to the strong period preference for grain-fermented wines, further literary sources reveal that a variety of liquor types existed in the mid- to late Koryŏ period. For example, Yi Kyubo, a low-ranking official who particularly enjoyed drinking liquors and authored the above-mentioned Kuk Sŏnsaeng chŏn in the thirteenth century, also wrote about different kinds of liquors in his poems.[19] These include ihwaju 梨花酒 (pear-blossom liquor), jaju 煮酒 (boiled liquor, distilled liquor?), hwaju 花酒 (flower liquor), ch'ohwaju 椒花酒 (Sichuan pepper liquor), p'ap'aju 波把酒 (wave liquor), baegju 白酒 (white liquor), bangmunju 方文酒 (liquor brewed according to recipe), chunju 春酒 (spring liquor), cheonilju 千日酒 (thousand-day liquor), cheongeumju 千金酒 (liquor [brewed using the bark of a] cheongeum [tree]), and nogpaju 綠波酒 (green wave-like clear liquor). Many other names of liquors are mentioned in other literary works as well. Counting all of these, there were more than 25 different liquors enjoyed by the Koreans. Later sources show that most of these kinds of alcoholic drinks continued to be consumed in the Chosŏn period.[20] Here, we should note that many sources suggest that there were various kinds of alcoholic drinks other than grain-fermented liquors, a point that did not receive the attention of the Chinese traveler of the twelfth century.[21]
Among the liquors of Koryŏ, some were quite distinctive. Chang Chihyŏn 張智鉉 (b. 1928) has provided the most complete analysis of the available documentary sources. He argues that various distinctive alcoholic drinks were imported from China and its northern dynasties, including those of the Khitans, Jurchens, and Mongols, as diplomatic gifts and commercial goods.[22] Based on his analyses of the passages on diplomatic relations recorded in the Koryŏ sa and other documents, Chang Chihyŏn suggested possible dates for the importation of certain liquors from China to Koryŏ. Here is a list of names of liquors of foreign origin and possible dates of their transfer to Koryŏ based on Chang's analysis:[23]

- sangjonju 上尊酒 (supreme liquor): December, first year of the reign of King Ch'ungsŏn 忠宣
- baegju 白酒 (white liquor): August, 28th year of the reign of King Ch'ungnyŏl 忠烈 (r. 1274-1308)
- jungsanju 中山酒 (Zhongshan liquor): mid-Koryŏ period
- jeunglyuju 蒸溜酒 (distilled liquor): late Koryŏ period

Among these, particularly noticeable besides the distilled liquors are liquors that were based on ingredients other than grains, including podoju (grape wine), yangju 羊酒 (sheep's milk liquor), and mayuju (mare's milk liquor). Evidence for sheep's milk liquor is found in a reference to King Yejong, who presented it to a Koryŏ general in honor of his achievement in conquering the Jurchens in 1107. The Koryŏ sa does not explicitly discuss its origins; however, Chang Chihyŏn argues that the Koreans had probably received it from the Khitans or Jurchens through official trade before this particular gift-giving event, because sheep's milk liquor was a typical liquor of nomads and did not develop in agricultural societies. Our sources also clearly document mare's milk wine and grape wine as offered to the Koryŏ court by the Mongols. According to Chang Chihyŏn, tonglao 潼酪 was a nickname for an alcoholic drink based on mare's milk drunk commonly by northern nomads, which was also called ma tonglao 馬潼酪.[24] It appears in the Koryŏ sa chŏryo 高麗史節要 (Essentials of Koryŏ History) as part of a 1231 offering by a Mongol general to the Koryŏ king Kojong (r. 1213-1259) following the initiation of diplomatic negotiations between the two.[25] At that time, the Mongols were invading and devastating virtually the entire Korean peninsula while the Koryŏ government resisted them from the small island of Kanghwa on the west coast. The Yuan emperor also offered grape wine to the Koryŏ king Ch'ungnyŏl as a gift in 1302 and 1308.[26] As seen in the sources, most of these foreign liquors seem to have been used exclusively by kings and nobles receiving special royal gifts, and were not shared by ordinary people.

Among these foreign alcoholic drinks, one kind did spread rapidly through the ranks of the ordinary people: distilled liquors, called aralgil 阿剌吉 (the Chinese alaji, representing the Turkic form araji, e.g. Mongolian arkhi), grape wine, and then soju.
As we have seen above, because this liquor is not found in Yi Kyubo's writings of the mid-Koryŏ period, it most likely was not popular until the mid-thirteenth century. Yet several late Koryŏ-period sources hint at the fact that it had become quite popular at the local level by that time, and later sources even demonstrate that it had become one of the most important liquors of the Chosŏn period that followed. While the way other foreign wines were transferred can be traced quite clearly in documentary sources (as seen in the example of mayuju discussed above), the relevant sources on distilled liquors provide only inconsistent information about their origins, resulting in several different theories. The following section will examine the ways in which distilled alcohols were transferred from China to Korea by reviewing some of the major points of existing theories as well as examining broader ranges of sources and studies, including the most recent works on the history of distillation and new evidence from archaeological and anthropological findings.

Arakhi and Soju: Introduction and Popularization of Distilled Alcohols in Korea during the Late Koryŏ Period

Three passages from the Koryŏ-period sources suggest the popularization of distilled alcohol, specifically soju and arakhi, in the late Koryŏ period. Two of them, both in the Koryŏ sa, explicitly mention soju. The first of these passages, in the biography of Ch'oe Yŏng 崔瑩 (1316-1388), introduces a general under Ch'oe Yŏng's command named Kim Chin 金縝 (fl. 1360s), who loved soju excessively, failed to do his duty, and was punished.[27]

Before this event, when Kim Chin was the head of Kyŏngsang province, he drank wines and played day and night along with officers under his command, calling in many famous kisaeng (female entertainers). Because Kim Chin enjoyed drinking soju, people in the army called him and his men the "soju group." And because he assaulted and insulted his soldiers and assistants if they displeased him, they all harbored resentments and grudges against him. When Japanese enemies burned and looted the barracks in Happo 合浦, the soldiers said: "Let the soju group defeat the enemy. How can we fight?" They then retreated and made no effort to go and fight. Yet Kim Chin fled alone on horseback, and the army was defeated in the end. Then he [Ch'oe Yŏng] degraded Kim Chin to a commoner and condemned him to exile in Ch'angnyŏng 昌寧 County, and then moved him to the island of Kadŏk 嘉德. He then executed Yi Tongpu 李東榑 and Kim Wŏnkok 金元穀 of the Mongol regiment in Happo.

Ch'oe Yŏng was one of the most important generals in late Koryŏ history. He supported the last kings of Koryŏ against the Yuan and also against the newly rising powers that would establish a new dynasty, named Chosŏn 朝鮮 (1392-1897).[28] We can understand that the compilers of the Koryŏ sa included the story of Kim Chin in Ch'oe's biography in order to show that there were bad officers like Kim who violated the military code of conduct in crucial situations and that Ch'oe Yŏng punished them appropriately. The fact that a group of people who enjoyed soju to excess were called a "soju group" shows that soju was a strong alcoholic drink, that it was commonly known, and that its consumption had spread widely among Koryŏ armies.
Another piece of evidence in the Koryŏ sa, in the section on prohibition, shows that soju was broadly popular in many sectors of society. An article issued in 1375 prohibited soju and other luxury goods such as silks and gold and jade wares in order to stop their consumption, because many people squandered their fortunes on them. Soju is not documented in any of the earlier Koryŏ-period literature by those who enjoyed alcoholic drinks, but here it is suddenly found being consumed by nobles and probably rich merchants (if not ordinary people) as commonly as silks and other luxury goods were consumed at that time.[29]

Whether the soju mentioned in these two passages of the Koryŏ sa was distilled liquor is not certain, but the name soju makes this likely. We can assume from the contexts of these texts that soju was regarded as a strong and special alcoholic drink, unlike earlier traditional Korean alcoholic drinks. Other Koryŏ sources do not explicitly reveal what kind of drink soju was, but another piece of evidence from the same period suggests that there was distilled liquor at the time. A poem by Yi Saek 李穡 (1328-1396, also known by his pen name Mokŭn 牧隱), an important Neo-Confucian scholar and a tutor to many major governmental officials of the late Koryŏ and early Chosŏn periods such as Chŏng Tojŏn 鄭道傳 (1342-1398), describes a liquor called aralgil liquor as "[…] forming like autumn dewdrops, and dripping down at night … after drinking half a cup of the liquor, a warm feeling spread to the bone."[30] We can easily assume that the aralgil liquor described in the poem was produced by making a liquid extract from raw spirits through a distilling process, and that it was a strong alcoholic beverage. The fact that soju was also called noju 露酒, that is, "dewdrop wine," in later Chosŏn-period sources suggests that the distilled aralgil liquor described by Yi Saek, which was clearly popular among the literati, was related to the soju of the time. Yet these earliest pieces of documentation do not reveal further details about these liquors, such as their ingredients. Nor can we find out from Koryŏ-period sources how the new types of alcoholic drinks began to increase in popularity in Korea during Koryŏ times.

Sources from the succeeding Chosŏn dynasty provide more details about soju, including its origins, its relationship to aralgil liquor, and instructions on how to make it, including the ingredients to be used. These sources comprise different kinds of literature written by Korean scholars: informal essays, poems, and writings for practical use. Several sources explicitly say that soju originated in the Yuan period. According to these sources, soju was transferred from China to Korea during the latter part of the Koryŏ era. This matches well with the sudden and simultaneous rise of soju attested by our sources at that time. However, there are passages with different content that some scholars have used to refute the Yuan-period transfer theory seen in many Chosŏn-period accounts. Chang Chihyŏn, who examined the documentary sources most thoroughly and systematically in order to trace the origin of soju, organized the differing ideas into three theories: 1) soju was created during the Yuan period; 2) soju was created before the Yuan period; 3) soju was transferred from West Asia to China and Korea along overland routes.
Chang found some logical gaps in the existing theories and tried to supplement them with new reasoning by connecting these theories from comparative perspectives. Yet Chang's analysis is limited to an examination of Korean and Chinese documents and of studies about alcoholic drinks done up to the 1980s; he somewhat neglects technology and archaeology and fails to pay sufficient attention to some important historical contexts that would explain aspects of the technology transfer in depth. Later works on the origin of soju in Korea have mostly relied on earlier studies like Chang's. Moreover, although most of the Korean passages that discuss the origin of distilled alcohol imported their discussions directly from earlier Chinese passages or re-interpreted them, many of the earlier studies of the debate did not perform sufficient textual critique. These Korean passages about the origin and rise of soju in China have been the subject of major debates. There had already been huge debates about the general origin and transfer of distillation technology on a worldwide basis, and new findings and studies have fueled them even more. Yet, regarding the history of soju in Korea, there has been no major new study since Chang's. Considering these weaknesses in earlier analyses and using new evidence, let us discuss the most important points that help trace the origin of soju in Korea. We will divide the history of soju in Korea into two basic periods, the Mongol period and the pre-Mongol period, and then examine the pros and cons of the different interpretations for each of them.

The Mongol Period Origin Theory

The earliest Chosŏn-period account about the origin of soju is the Tongŭi pogam 東醫寶鑑 (literally "a precious mirror on the medicines of eastern [countries]") by Hŏ Chun 許浚 (1539-1615), the most renowned medical book of the Chosŏn period. The passage reads: "Soju appeared beginning in the Yuan period. Its taste is extremely intense. Immoderate drinking will ruin your health."[31] This description is short, yet several later Chosŏn-dynasty accounts, which basically say the same thing about the origins of soju, probably because of the influence of Hŏ Chun's account, give more details about its characteristics and the dangers associated with it. For example, an early seventeenth-century writer says: "Soju is a liquor that arose from the time of the Yuan dynasty. As it was only taken as a medicine, it was not used haphazardly. Due to this, it became a custom that small cups were called soju cups. In the present day, however, those of upper status drink great amounts, to their heart's content; in the summer they drink much soju from large cups. Drinking their fill and becoming drunk like this has caused many a person to suddenly die."[32] That they used small cups for the liquor clearly shows that it was a stronger alcoholic drink, as Hŏ Chun mentions; this must have been distilled liquor. The passage also shows that soju was basically used as a medicine and, in addition, was often drunk by elites.

Hŏ Chun wrote his medical book by reviewing all of the East Asian (mostly Chinese) medical works transmitted up to his time and summarizing their most important content. It is not surprising, therefore, that the Yuan-period transfer theory started by Hŏ was influenced by information in the Bencao gangmu 本草綱目 (Principles and Categories of Materia Medica), written at the end of the Ming dynasty, which contains almost identical wording about the origin of Chinese shaojiu 燒酒 (the Korean soju) to that found in Korean sources.[33]
The authors of the Chosŏn period often cited earlier Chinese accounts in making their points. Because of this, we have to examine the original Chinese sources used by the Chosŏn authors, as some might have been written in different contexts and from different points of view. Other questions need to be answered as well: What were the distilled alcoholic drinks mentioned in the Chinese sources like? What were their names? When and where were they from? What kinds of ingredients were used? Li Shizhen's Bencao gangmu says that shaojiu, "roasted liquor," is huojiu (火酒, "fire wine") and alaji jiu (阿剌吉酒, i.e. "araji wine"). As its primary source, it cites Hu Sihui's 忽思慧 Yinshan zhengyao 飲膳正要 (Proper and Essential Things for [the Emperor's] Food and Drink), the official dietary manual for the Mongol court in China, presented in 1330. As the earliest extant documentary source on alaji (aralgil), this account renders the liquor's name according to its Uighur pronunciation, arajhi. While it does not discuss its ingredients, it does establish arajhi as a distilled liquor and describes the distillation process.[34] While not found in Hŏ Chun's book, many sources of the Chosŏn period were influenced by Li Shizhen's account, which refers to shaojiu, the Korean soju, as aralgil 阿剌吉 wine. Sŏ Yuku 徐有榘 (1764-1845), a Silhak (practical learning) scholar of the eighteenth and nineteenth centuries, cited Li's sentence about shaojiu in order to argue that the alcohol originated in the Yuan period.[35] Besides this, the fact that Yi Saek's poem (cited above) also refers to the distilled wine as aralgil rather than as shaojiu, or soju, shows that the distilled drink had found its way to Korea under the name arakhi / arajhi before Li Shizhen's compendium arrived in Korea in the early Chosŏn period. Another Yuan-dynasty account, the "Poem about Arajhi Liquor" (Yalaiji jiu fu 軋賴機酒賦) by Zhu Derun 朱德潤, uses a different set of Chinese characters with a similar pronunciation, yalaiji 軋賴機, to describe the liquor.[36] According to anthropological research conducted in the early twentieth century, many regions of the Korean Peninsula called soju arajhi liquor in similar yet slightly different forms of transcription, such as "arak liquor" (arakju) or "arang wine" (arangju).[37] Various other transcriptions of arajhi are found in documentary sources from the Chosŏn period, e.g. in the Yŏn'gyŏngjae chŏnjip 硏經齋全集 (Complete Works of Yŏn'gyŏngjae) by Sŏng Haeŭng 成海應 (1760-1839).[38] This again suggests that arajhi / arak was transferred to Korea through diverse channels, at least one of them associated with the mixed elite, Turkic and other, of North China at the time of transmission.

[33] Chang Chihyŏn 1989, 61.
[34] Laufer 1919, 236f; Luo Feng 2012, 505f; Buell 2015.
[35] Chang Chihyŏn 1989, 49f, 61.
[36] Liu Guangding 2002, 317.
[37] Chang Chihyŏn 1989, 56ff.

In his study, Chang Chihyŏn tried to examine aralgil liquor and soju side by side and trace their origins. He concluded that, because many sources define soju as aralgil wine, the latter liquor existed before soju. He also provides more details about these two kinds of distilled liquors, such as their ingredients. Citing several sources including encyclopedias, Chang argues that the alajhi or aralgil wine that began to appear in Yuan-dynasty documents was based on milk fermentation, and that the Chinese imitated the Mongol distilled liquors and began to make distilled liquors using their traditional brewing materials, calling the result shaojiu to distinguish the new products from Mongol arajhi.[39]
From this perspective, we should now investigate the available sources about arajhi wine in order to understand its relation to shaojiu and Korean soju. Recent findings strongly suggest that the arajhi or aralgil consumed in China before its transfer to Korea was based on cow's and mare's milk (or even camel's milk). In his article about the recent discoveries in Mongolia of stills that date back to the early part of the Yuan dynasty or even slightly earlier, Luo Feng 羅豐 argues that the stills were created to distill airag or kumiss, the alcoholic drink based on fermented mare's and cow's milk popular among the Mongols and in other nomadic societies.[40] From there, it continued to develop, being consumed as arakhi / arajhi in distilled form in modern Mongolia and other nomadic societies of northern Eurasia, although due to the influence of Islam most Kazakhs no longer distill mare's or cow's milk. Luo Feng also introduced eyewitness accounts of the stills and distillation practices of the Volga Kalmucks and Mongols written by the late eighteenth-century German zoologist and botanist Peter Pallas (1741-1811), to which Paul Buell has now added an extensive comparative analysis.[41] Considering the archaeological findings and the subsequent developments of distillation in Mongolia, it is highly likely that the arajhi consumed by the Mongols, and documented in the Yinshan zhengyao in the fourteenth century, was based on mare's or cow's milk fermentation. On the other hand, Li Shizhen's Bencao gangmu of the Ming period, which introduces shaojiu as being arajhi liquor citing the Yinshan zhengyao, clearly testifies that people in his time made distilled liquor based on grain-fermented liquors.[42] As Chang Chihyŏn suggests, it is possible that the Mongols used mare's or cow's milk when they first popularized the distilled wine called arajhi, yet that when it spread in China, people began using fermented grains to make distilled liquors, as documented in the Ming-dynasty account.

Then, as such distilled liquors spread in China, were they also transferred to Korea during Koryŏ times? What were the relationships between Mongol China's arajhi liquors and Korean soju? How and why did arajhi and soju become identified with each other in the Chosŏn period? How did such drinks become popular in Korea so quickly? These are only some of the questions that we have to tackle, based on the findings of earlier and more recent studies, in order to solve the mystery of the sudden rise of soju at the end of the Koryŏ period.

It seems clear that the nomadic Mongols played a major role in facilitating the spread of arajhi / arakhi in China and its transfer to Korea. Before we look at the historical context, however, we have to address the challenges posed by another theory which argues, based on different interpretations of the sources, that such distilled liquors existed before the Yuan dynasty. Doing so will help us to investigate details such as the ingredients of distilled liquors and a possible transfer of distilled liquors to Korea before the time of the Mongols, at least on a limited scale.
The Pre-Mongol Period Origin Theory

The Korean document from the Chosŏn period that states explicitly that Korean soju and aralgil liquor existed before the Yuan, and that challenges the Mongol-period origin theory most systematically, is one written by Yi Kyukyŏng 李圭景 (1788-1856): "Noju is soju, and some people call it hwaju 火酒 (fire wine). It became known in China from the Yuan period. In general, it was imported from the foreigners of the maritime southwest (Sŏnambŏn 西南番). It probably originated in the Tang-dynasty period."

Here, the term "foreigners of the maritime southwest (Sŏnambŏn 西南番)" means societies in Southeast, South, and West Asia accessed through the maritime routes of the time. This passage suggests that shaojiu / soju was transferred to China via the southern maritime routes, rather than via the northern routes used by the Mongols. It also implies that soju may have been transferred to Korea before the Mongol period. Yi Kyukyŏng cited as his primary source a Chinese household encyclopedia entitled Jujia biyong shilei 居家必用事類 (Essential Things for Living at Home), which was probably written in the late Yuan or early Ming period. The main passage in question, which unmistakably refers to distillation and seems to describe a Western-style distillation apparatus with a serpentine, reads:[44]

"Cooked Liquor": Whenever one cooks liquor, use 2 qian of wax, 5 slices of bamboo leaf, and "official" Arisaema japonica, a fine half a kernel for each tou. Open up and look; if the liquor is boiling, then it is ready. Put into the fire for a long time. When you take it down, put it into lime. One should not move it continuously. One wants the white liquor to expel in order to obtain the clear [distilled] liquor. Afterwards, when cooking again and again, use mulberry leaves to repose. This is to prevent the aroma qi [vapor] from being cut off.

Elsewhere the text has some of the same phrases about the origin of soju as those found in Yi Kyukyŏng's account.[45] Notably, Yi Kyukyŏng does not simply repeat the aralgil of Yi Saek and Li Shizhen, or the yalaiji 軋賴機 as used by Zhu Derun. Yi was able to use the greater number of sources available to him in order to enhance his theory. First, he provides another name for soju, noju (dewdrop wine), which is only found in Korean sources, including the Koryŏ-period poem composed by Yi Saek. Second, he cites other Chinese documentary sources to argue that soju / shaojiu should indeed be traced back to before the Tang dynasty.[46] In fact, before Yi Kyukyŏng, his grandfather, another practical-learning scholar named Yi Tŏkmu 李德懋 (1741-1793), had refuted the Mongol-origin theory by citing in his work a Song-dynasty account that mentions Xianluo jiu (Xianluo liquor, 暹羅酒), a liquor, judging by its name, possibly of Southeast Asian origin, that was twice-brewed from shaojiu. Yi argued that shaojiu / soju already existed during the Song dynasty, and that because shaochun 燒春 existed during the Tang, it is possible that shaojiu / soju existed before the Song period, too.[47]

While the arguments of Yi Kyukyŏng and Yi Tŏkmu are based only on the documentary sources available to them at the time, they are both supported and challenged by new findings in recent studies. Based on a variety of sources including archaeological finds, Huang Hsing-tsung shows in his volume in Joseph Needham's Science and Civilisation in China series that the Chinese invented distillation methods before the Mongols, as far back as the Han dynasty.[48]
Feng Enxue 馮恩學, in the most recent paper about archaeological findings related to distilled alcohols in China, demonstrates that distillation technology existed in the later Han period, and argues that the earliest documented textual evidence, as opposed to archaeological evidence, for Chinese distilled alcohols is found in an account from the Song dynasty.[49] Yi Kyukyŏng's argument that shaojiu (though not specifically Korean soju, which shares no more than a name with shaojiu) originated before the Yuan period is thus supported in our sources, but this may not mean much for Korea. The question of the role of the maritime routes needs further study in a broader historical context, including the evidence for very early distillation in India discussed by Needham et al. To make his point, Yi cites the Jujia biyong shilei, which he interprets as saying that shaojiu / soju came from the southwestern barbarians, that is, societies reachable via maritime routes. What the one late Yuan-early Ming account does suggest is that the spread of arakhi, the form of the word in the Jujia biyong shilei, is related to the maritime trade that flourished during the Song-Yuan period, but exactly how is not entirely clear. However, Yi Kyukyŏng argued forcefully that the text probably had a much greater significance. He discusses the spread of soju via maritime routes, even to Okinawa, by utilizing new sources available to him at the time of his research. That is, the situation regarding the transfer and spread of distilled liquors in Yi Kyukyŏng's time had grown more complicated. In his case, the arrival of Europeans in Asia might have worked a new variable into the region's liquor trade, making some of his evidence uncertain. Therefore, rather than take Yi Kyukyŏng's argument at face value, it is important to consider that the historical context behind the transfer of distilled alcohols that he describes differs from the context that surrounds the original Yuan-dynasty sources.

While we can put aside Yi Kyukyŏng's nineteenth-century arguments, it is still important to pay attention to the reference in a Yuan-dynasty source taken to indicate the origin of arakhi liquor via the southern maritime routes, although the passage in question only marginally supports such an interpretation. To be sure, many goods including spices came to China during and prior to the Mongol period through the maritime connections in which many merchants from Southeast, South, and West Asia actively participated. The Mongol period witnessed an unprecedented boom in maritime trade, yet international trade through the sea routes had already been flourishing even before the Mongol period. There is a possibility, therefore, that distilled liquors were transferred to China and Korea before the Mongol period, if we consider another theory of their origin in West Asia, transmitted through India and Southeast Asia, which has been argued by some earlier studies.[50] How then can we interpret this case in connection with other sources such as the archaeological findings discussed above?
In order to assess the credibility of this southwest transfer theory, let us summarize a possible scenario of distilled-liquor transfer based on the discussions offered above. Distillation technology had already developed in China from ancient times, and it is possible that the Chinese commonly drank distilled alcohol based on grain-fermented wine during the Song dynasty. The Mongols, as they expanded their political power across Eurasia, adopted Chinese distillation technology in order to create the kind of milk-ferment liquor they drank. They called this new liquor arakhi / arajhi, appropriating a foreign term, and thereafter popularized it wherever they expanded. When they conquered southern China and established the Yuan, they encountered Chinese who manufactured distilled alcohols based on grain fermentation, a method which had evolved thanks to China's largely agricultural environment. The Chinese might have continued calling their traditional liquor shaojiu in order to distinguish it from the arakhi / arajhi popularized by the conquering Mongols. Similar patterns of transferring, spreading, and transforming distilled liquors like arakhi / arajhi and later soju could apply to Korea, which fell into the sphere of Mongol influence at this time. Where, then, did the name arakhi / arajhi come from? It may have originated somewhere in the maritime trade networks via South and Southeast Asia, but the term was still transformed and Turkicized once it had arrived, and identified with Chinese shaojiu.

Based on linguistic analyses, differing theories about the origin of the term arakhi / arajhi have been proposed, including two major ones. The most convincing theory claims that the wine's name comes from the southern Arabic word araq for water drops (sweat or the sap of trees).[51] Some scholars who support this theory have argued that distilled alcohol called by this name was imported from West Asia to East Asia through either maritime or overland routes. Detailed arguments vary, yet a major claim of this theory is that distillation technology was first invented in ancient Greece under the name alambic and then transferred to Persia [the Middle East], where it was given a new name and a technology for making wine; then it was transferred to India (although Indian distillation is very old and may be an independent invention)[52] through merchants before travelling southeast to Southeast Asia through the maritime routes and even northeast via the overland routes; along both routes, according to this theory, it traveled on to China and Korea. This theory of a possible West Asian origin of soju was discussed first in Korea by Ch'oe Namsŏn 崔南善 (1890-1957), a pro-modernization scholar active from the late Chosŏn era to the early years of the Republic of Korea. By considering that aralgil and similar transcriptions derive from the word arakhi / arajhi, Ch'oe Namsŏn argued that the West Asian culture of distillation first influenced the transfer of distilled alcohol eastward, a transfer which continued through the contact routes to eastern Eurasia.[53] In fact, he proposed that soju was transferred from West Asia to China and Korea through the northern Eurasian routes rather than the southern maritime routes.[54]

[51] Laufer 1919, 237.
[52] Allchin 1979.
Introducing this argument of Ch'oe's as a distinct theory, Chang Chihyŏn concluded that, while it sounds plausible, it is also worth considering the transfer of distilled liquors by West Asian merchants via the southern maritime routes. Another theory about the origin of the term arakhi / arajhi posits that the name comes from the areca nut of India, which was called arrak for many years.[55] Both theories demonstrate the possibility that a distilled liquor called arakhi was transferred to China through its maritime trade with India and Southeast Asia. (Since some Yuan-dynasty sources mention that the drink originated via the southern maritime routes, I will address the history of distilled liquors in Southeast Asia in another work.) We can now consider it possible that this southern arakhi gradually attracted the attention of the Chinese, who had their own shaojiu traditions too, after it was imported to China as a trade good, and that the Mongols assimilated it together with Chinese shaojiu traditions as maritime contacts grew. Documents report that northern peoples enjoyed soju more than southern peoples due to their colder environments, and all of the excavated stills have been found in northern China, so it is highly likely that the Mongols adopted and developed distilled liquors, and that modern Mongol distillation technology is a variant of the Chinese.

If we accept the prevailing theory that the name araq originated in Arabic, it is indeed possible that a Middle Eastern or Indian distilled alcohol was imported from West Asia through traders traveling to other regions, who referred to it using a variety of transcriptions of its original name arak. Some scholars even suggest that it was transferred from West Asia to Korea by West Asian merchants during the Silla period through Tang-dynasty China, as there are some pieces of evidence for the travel of West Asian merchants all the way to Silla.[56] However, there are also problems with this theory. West Asian merchants, most of whom were Muslim, probably did not often consume or deal in alcoholic drinks because of their religious ban, although this was not yet so strictly observed in that period; while they were allowed to use alcohol for medical purposes, it might have been difficult for them to carry it with them as a trade good, especially on a large scale.[57] Moreover, the fact that the Chinese had already developed some distillation technology independently from the beginning of the first millennium and used it to a limited degree before they had developed active trade relations with West Asia, a fact supported by recent studies and archaeological excavations, challenges these direct West Asian origin theories. In addition, sources that discuss trade goods do not explicitly mention distilled liquor among them. Even if distilled liquors were transferred through these avenues to a limited degree, it is highly likely that distillation technology, as the Mongols found it in China whatever its ultimate origins, spread during the Mongol period, when the Mongols connected Eurasia and promoted unprecedented contacts. New studies show that West Asian merchants also visited Koryŏ on a large scale during the Mongol period.[58] These contacts would have worked as complementary factors accelerating the promotion of distilled liquor in Korea during the Koryŏ period.
In sum, until more concrete pieces of evidence become available, the debate about the origin of distillation and its transfer to Korea will continue. Yet none of the available sources discussed above denies the key fact: distilled liquors including arakhi and soju began to spread rapidly in Mongolia and China beginning in the Mongol-Yuan period. These types of distilled liquors, which developed through complex processes in China under Mongol rule, could have been transferred to Koryŏ through its political and military relations and the international commerce these relations facilitated, leading to the rapid rise of soju in Korea. Whichever way it was, it is clear that the Mongols played a major role in bringing about a major change in the drinking culture of Korea. The following section will examine the historical context.

The Avenues for the Rise of Soju in Korea: Mongol Army Camps, International Trade, and Cultural and Technological Influences through Scholarly Contacts

All of the existing theories about the origin of soju in Korea, including those proposed by Chang Chihyŏn and others reviewed above, focus mainly on documentary sources and linguistic analyses. They fail to devote sufficient attention to the transfer of distillation technology through cultural contacts between the Mongols and Koreans. All of the soju and arakhi liquors that became popular from the Koryŏ period on are distilled alcohols and thus involve a special technological method, distillation. This makes their transfer different from that of other liquors brought from other societies, such as China, to Korea as diplomatic gifts or trade goods. Unlike other alcoholic drinks of special kinds, such as grape wines, the official Koryŏ-period documents do not record an occasion of soju transfer as an official gift or trade good. Therefore, we have to assume that it came to Korea through other means. It was also difficult in premodern times for a special gift to be transmitted into a different society so quickly, and the fact that several other liquors of foreign origin did not spread so quickly supports this conclusion. Here, we have to pay attention to the fact that the Mongols who influenced Korea from the mid-thirteenth century had long enjoyed drinking liquors, including kumiss.[59] If they brought their brandies to Koryŏ, how did they do so, and how did this new type of alcoholic drink become popular so quickly?

[59] Bayarsaikhan 2016.
We have to view the historical context more carefully, focusing on the many unprecedented conditions and circumstances of Koryŏ society that emerged only at that time. The Koryŏ sa shows active political and cultural relations between the Koryŏ dynasty of Korea and the Yuan dynasty of China. This served as a facilitating environment for the transfer of distillation technology, among other things, from China to Korea. Starting in the early thirteenth century, in the course of their Eurasian conquest, the Mongols invaded Koryŏ. Koryŏ resisted for about 30 years but finally sued for peace in 1259 and became a quda (marriage alliance) state of the Yuan dynasty. After that, the monarchs of Koryŏ became imperial sons-in-law (khuregen) until King Gongmin 恭愍 (r. 1351-1374) began to push the Yuan garrisons back around 1350. It is not strange to see many economic and cultural exchanges taking place through the diplomatic relations of that time, including the transfer of soju. The Mongol soldiers stationed in Koryŏ brought the foods they enjoyed, and for soldiers, alcoholic drinks were important. Mongol drinks such as airan / kumiss were also foods of great prestige for the conquerors.

In fact, Yi Sŏngu 李盛雨 (1928-1992) had already argued, before Chang Chihyŏn, that the Mongol army probably brought distilled liquor to Koryŏ during its invasion. Yi Sŏngu suggested that, since its first introduction to Korea, soju developed well in certain locations, such as the capital of Kaesŏng, Andong, Chindo, and Cheju Island, wherever Mongol troops were stationed or waged battles on the Korean peninsula.[60] Chang Chihyŏn did not cite or introduce this theory, yet later works all mention Yi Sŏngu's theory as the most convincing account of the transfer and spread of soju in Korea. In fact, no concrete pieces of evidence support his theory.[61] However, the story about Kim Chin's soju group in the Koryŏ sa strongly suggests that the Mongol army camps on the Korean peninsula might indeed have influenced Koryŏ soldiers. Once the Koryŏ government surrendered to the Mongols after a long period of resistance, the official Koryŏ army cooperated with the Mongol army to crush the Sambyŏlch'o 三別抄, a powerful military unit of the Koryŏ dynasty that resisted the Yuan and their new Koryŏ allies. When these soldiers cooperated during the battles, it is highly likely that the Mongol soldiers, who had enjoyed kumiss, an alcoholic drink based on fermented mare's milk, during their Eurasian conquests, needed to distill it in order to preserve it for a long time during their stay in Korea, and also taught the distillation methods to Koryŏ soldiers. The Mongols were also famous for relocating people, including many craftsmen, to different places, as seen in the account of William of Rubruck.[62] As a consequence, relocated people contributed to the transfer of many new cultural elements between different societies, and these probably included distillation techniques. If those who knew how to make distilled alcohols were among the Mongol soldiers stationed in Korean army camps, it would have been easy for them to introduce this new technology of distilling liquor to Korea for their own consumption. In this way they naturally influenced Korean soldiers.

[60] Yi Sŏngu argues (1984, 216) that the fact that, from this early time on, these places became renowned for their production of high-quality soju proves Mongol influence on the development of soju.
[61] Pae Kyŭng-Hwa argues in her MA thesis (1999, 62) about Andong soju that there is no direct evidence for this.
[62] For example, see the episode about William the craftsman from Paris in the account by William of Rubruck, translated into English in Jackson 1990, 183, 209ff.
Technology transfers could have continued after the Sambyŏlch'o was defeated in 1273, as Koryŏ began to experience Yuan intervention on a full scale. Many Mongols came to Koryŏ as exploiters, and many Koryŏ people were brought to Yuan China unwillingly as tribute. Some of the Koryŏ people brought to the Yuan luckily returned to Koryŏ after staying in China for many years. Other Koryŏ people went there for other reasons, such as official and scholarly exchanges. This massive and unprecedented movement and exchange of people between China and Korea contributed to cultural and technology transfers, including distillation. At that time, many foreign merchants, including those from West Asia, also came to Korea to conduct trade.

No source explicitly suggests that these foreign merchants brought distilled alcohols to Koryŏ, yet it is possible that they did so, considering that Mongol influence on Korean society overall enabled many social and cultural changes, which require further study. Kim Janggoo, a historian of the Mongols, suggests that one reason for the sudden popularity of distilled alcohols in the late Koryŏ period may be the introduction of meat eating by the Mongols. Compared with the more vegetarian culinary habits of the Koryŏ period, based on Buddhist influence, the Mongols promoted meat eating in Koryŏ, as well as stronger wines with higher alcohol content, such as distilled alcohols, which were a better fit with meat eating.[63]

[63] Kim Janggoo provided this outline of an idea during his discussion of the paper at the Korean Association for Central Asian Studies Annual Conference in Korea on April 23, 2016. These various cultural factors that influenced the sudden rise of soju in late Koryŏ will be the topic of a separate chapter in the proposed monograph.

We also have to solve another mysterious issue surrounding the distilled wine: What was the ingredient of the soju that became popular through Mongol influence? The arajhi enjoyed by the Mongols in Eurasia was probably based on kumiss, mare's or cow's milk wine. However, the soju that became popular in Korea was made with grains like rice and barley. That is, Koreans distilled traditional clear strained wine (ch'ŏngju) made from fermented grains. We have learned from our sources that mayuju (mare's milk liquor) was brought to Korea in 1231 in the course of the Mongol invasion. Yet mayuju, or distilled alcohol based on fermented mare's milk, did not become popular in Korea, possibly because there were fewer horses there than among the Mongols. Therefore the Mongols themselves probably also applied the same distillation technology to Korean traditional alcoholic drinks such as the clear strained wine made from rice, or taught Koreans to produce and promote a new kind of distilled alcohol, that is, soju. Chinese sources of the Ming-Qing period say that the Chinese also used fermented grains to make distilled alcohols. Therefore, we need to compare the development of soju in Korea with the development of distilled alcohols in China. Yet soju became more popular in Korea as such, and therefore this localization and popularization of the new type of distilled alcohol in Korea is worth noting.
The new distillation technology made it possible to transform the traditional strained rice wines (ch'ŏngju), with an alcohol content of at most 6 to 8 per cent, into a much stronger wine, soju, with an alcohol content of more than 20 per cent. This new kind of distilled liquor began to grow popular as a high-class liquor at that time, and people used both terms, arakhi and soju, for the same distilled liquor, which was by then mostly based on grain fermentation. However, the Yinshan zhengyao also mentions "Small Coarse Grain" liquor along with other distilled liquors like Arajhi liquor and Sürmä liquor.[64] Paul Buell argues that this might be a sorghum distillate, since sorghum was later very popular in China for making a kind of whisky (called, like the grain, gaoliang 高粱).[65] As the account of William of Rubruck in the thirteenth century shows, the Mongols drank not only kumiss but also alcoholic drinks based on materials such as grapes and grains in the course of their expansion into Eurasia;[66] therefore, it is necessary to investigate the possibility that the Mongols used these other ingredients to make distilled liquor.

[64] Buell et al. 2010, 498.
[65] Buell 2015.
[66] Jackson 1990, 178f, 191.

When considering the origin of the distillation transfer, we should consider the technological aspects too, such as distillation itself. Unlike other types of liquor-making, the production of distilled alcohol requires a basic knowledge of the fundamentals of the distillation process and proper tools, namely stills. By examining all available documentary and archaeological sources, Paul Buell has argued that the Mongols popularized distillation technology by making stills portable, as the Mongol stills of today still are, even when made of wood. Without these portable devices, it would have been difficult for the Mongols to transfer this technology to other societies rapidly.

What kinds of stills were used to make the first distilled soju or arakhi in Korea at the end of the Koryŏ dynasty? Unfortunately, no actual still survives from the period. However, we can gain some idea of what the early Korean stills were like by looking at documentary descriptions, archaeology, and anthropological studies providing evidence for stills used during the Chosŏn period, because the earlier late-Koryŏ-period stills continued to be used under Chosŏn, albeit with possible minor modifications. The basic process of distillation remained the same throughout premodern Korea, with great similarities to the processes of other societies in Asia.
The traditional stills for distilling soju in Korea were called soju kori 燒酒古里. Kori, a Korean word meaning a ring or circular object, was probably adopted to signify the traditional bowl that was used to distill soju. A Korean dictionary defines it as being "made of copper or glazed earthenware as a folded pair, with one at the top and one at the bottom."67 This type of still brought the distilled extracts to a collector outside the still through pipes. Most of the models of stills exhibited in museums in Korea are of this type. Once-strained wine was placed in the bottom pot, and then another, smaller pot was positioned upside-down to cover the bottom pot. The upper pot has a lid for cooling water. In order to produce soju, one first heats the pot; one should then pour cold water onto the lid so that the evaporated alcohol inside the pot condenses and collects on the lid, and from there gradually trickles down into the waiting pot. While the above-mentioned still was probably the most popular type used in Korea, scholars who have examined stills used in different provinces of Korea attest that Koreans used different kinds of stills made of wood, earthenware, or brass, and different kinds of containers for collecting the distilled extracts, either inside the still or outside through a pipe (for several types of traditional Korean stills for distilling soju, see figure 1).68 In fact, the stills that developed in Korea have many similarities with those developed in China and Mongolia. These are quite simple and portable, and different from the large stills that were used in China during the Ming-Qing period. It would not have been difficult for the technology to be transferred to Korea, and probably to other societies like Turkey and Mexico, cases supported by some archaeological findings. Soju production and consumption continued in China after the fall of the Mongols, yet its development differs from that of Korea in many respects. Therefore, it would be worth comparing them to understand why soju became popular so quickly in Korea, and what cultural interactions occurred in such a special situation. This will be done in a chapter of the proposed monograph.
Conclusion

This paper re-examines the rise of soju at the end of the Koryŏ period, which marked a new era in Korean drinking history, from the perspective of distillation-technology transfer in Eurasia during the Mongol period. While making use of all the sources available to date, the relative lack of material forces us to rely on reasoning and inference to create the most comprehensive and convincing explanation possible. By comparing it with earlier traditional Korean alcoholic drinks, we have clearly seen how soju was distinctive and new. Yet our sources do not clearly indicate when and how soju spread to and within Koryŏ at that time. That is why many different theories have competed for preeminence. This paper has reviewed earlier theories, including those by Chang Chihyŏn and Yi Sŏngu, and has also examined the most recent studies done in different languages, as well as new archaeological findings. Because of the space limitation here, more detailed analyses will be carried out in the proposed monograph; however, we can propose the following provisional conclusion from the current examination.

First, distillation developed independently in China. Yet it was the Mongols who adopted distillation technology from other cultures such as China to make distilled alcohols using the fermented mare's milk that they enjoyed. They named it arakhi, a foreign word from West Asia that migrated overland and via the sea routes, and popularized it in large parts of Eurasia, including China and Korea under Mongol influence, in the course of mobilizing goods and people, including soldiers and merchants. Merchants from different societies active in the international trade of the time probably accelerated the transfer processes.

The case of Korea, where soju became popular right after the coming of the Mongols, is supported by a good number of documents and historical contexts. That some Mongol soldiers recruited to Korean army camps were possibly from craftsmen families able to introduce distillation technology is a quite likely scenario. While we cannot deny the possibility that soju, that is, shaojiu, was transferred earlier from China to Korea, no evidence supports this so far. The available pieces of evidence all clearly indicate that distilled alcohol spread widely only after it was transferred from China to Korea during the late Koryŏ period.

The case of soju transfer clearly shows that a big cultural influence could occur through exceptional historical changes. Unlike some foreign alcoholic drinks, which were transferred beyond their cultural zone as tribute and then spread very slowly among kings and nobles, soju spread quickly within a short period of time under unprecedented historical conditions, such as "Korea's close connection to wider parts of Eurasia" through the Mongol empire. This development is intriguing as it involves a transfer of technological knowledge. Once the principles and basic methods are learned, this new knowledge could be simple. Yet it also requires things such as tools, e.g., stills; the people in Korea adopted basically the portable stills used by the Mongols and developed them further. While a form of arakhi based on mare's milk was introduced to Korea by the Mongol soldiers or foreign merchants, mare's milk was less available in Korea, and therefore Koreans also used their traditional brewing materials, that is, fermented grains, for distillation, and called the new brew soju, after an older Chinese term, in order to distinguish it from the original arakhi, just as contemporary Chinese under Mongol rule called the new distillate shaojiu or, in the plays, sayin darasun, good liquor. As the distilled alcohols grew continuously in popularity during the Chosŏn period, their production came to be based solely on grain fermentation, though people continued to use both terms, arakhi and soju, to refer to the same distilled liquor.

In short, while this kind of new technology transfer sounds simple, it actually involved very complicated interactions between different cultures amidst dynamic historical contexts in order to ultimately create a new product. A big change in liquor culture occurred in Korea while it was involved in large-scale Eurasian cultural contacts and technology transfers through its direct contact with the Mongols. The story of the rise of soju in Korea is a good example of the rise of a new cultural element based on tradition and innovation, involving both adaptation and localization of new technologies. A further investigation, as part of a larger study of the history of distillation on a worldwide basis, will help us explore the significance of the case of Korean soju in global history.

"Transform and put into the liquor. Close up tightly according to method. Place inside a boiler. ([subtext] During autumn and winter use an Arisaema japonica "pill." During spring and summer use wax and bamboo leaves.) After that start the fire. Wait until the aroma of the liquor penetrates up into the boiler twists [of the apparatus]. The liquor will come forth in profusion. Then raise the boiler again. Then take up the entire pot [with the liquor]."

As for arajhi, Yi Kyukyŏng refers to it as aliqi 阿里乞, another transcription of the name arakhi / arajhi. A form of arakhi / araq is used in Jujia biyong shilei, rather than the alajhi 阿剌吉 used by Yi
Distance learning strategies for weight management utilizing online social networks versus group phone conference call

Summary

Objective: The increase in technology and online social networks (OSNs) may present healthcare providers with an innovative modality for delivering weight management programmes that could have an impact on health care at the population level. The objective of this study was to evaluate the feasibility and efficacy of using an OSN to deliver a weight loss programme to inform future, large-scale trials.

Methods: Seventy individuals (age = 47 ± 12.4, minority = 24.3%) with obesity (BMI = 36.2 ± 4.0) completed a 6-month weight loss intervention and were randomized to either a conference call or OSN delivery group. Weight loss was achieved by reducing energy intake by 500–700 kcal·d−1 below estimated total daily energy expenditure and progressing physical activity to 300 min/week. Behavioural weight loss strategies were delivered weekly throughout the intervention.

Results: Conference call and OSN groups produced clinically meaningful weight loss of ≥5% from baseline to 6 months (phone = −6.3 ± 6.4%, OSN = −5.8 ± 6.7%). There was no significant difference in weight change between groups (p = 0.765).

Conclusion: The phone and OSN groups met the American Heart Association/American College of Cardiology/The Obesity Society's Guidelines by reducing baseline weight by 5–10% within 6 months. OSNs appear to be a viable delivery platform for weight loss interventions; however, larger scale adequately powered trials are needed.

Introduction

With the high prevalence of obesity in US adults (1) and obesity contributing to many comorbidities such as heart disease, hypertension, diabetes, some cancers, and psychosocial and economic difficulties (2-4), developing innovative methods to translate and deliver weight management interventions in diverse real-world settings is critical. Several alternatives to traditional face-to-face group delivery of weight management interventions, including mail or email (5,6), internet (7,8), a combination of phone, internet and email (9), individual phone counselling (10,11), group phone counselling (12) and text messaging (13), have demonstrated clinically meaningful weight loss (14).

Online social networks (OSNs) such as Facebook© or Twitter© have become a popular platform for information exchange and delivery. OSNs provide a platform where individuals can create public or semi-public profiles, connect ('friend') with other users, and interact and obtain information within a bounded system (15). Furthermore, OSNs support the upload of files, photos and audio recordings that can be accessed by anyone in one's network or by all viewers. A recent Pew survey on internet use reported that 84% of adults in the USA are online, with 74% reporting use of OSNs (16). Because of this popularity, OSNs are a potential platform for the delivery of health behaviour change interventions, such as weight management, to large groups of individuals in a cost-efficient manner.

Previous research has demonstrated a positive relationship between social support and weight loss (17-20). For example, Wing et al. found that 66% of individuals who completed standard behavioural treatment plus social support maintained their weight loss in full, compared with 24% who received standard behavioural treatment without social support (17). However, there is limited research on behavioural weight loss interventions delivered using OSNs (21-25).
The majority of previous trials used components of OSNs, such as bulletin boards or chat rooms, in combination with traditional weight management intervention delivery, rather than OSNs as the primary platform. The limited number of trials that used OSNs as the primary delivery platform were of short duration (less than 6 months) and had conflicting results. Sepah et al. examined participants receiving a weight management intervention through a study-specific OSN that incorporated small-group support, health coaching, the Diabetes Prevention Program curriculum and digital tracking tools. At 16 weeks, participants lost 5.0% of their baseline weight (24). However, not all studies have been promising; a review by Ashrafian et al. (26) found that interventions using components of OSNs for weight management produce a modest 1.4% greater reduction in body weight compared with control participants. Furthermore, Greene et al. (21) reported that a 6-month intervention using a study-specific OSN platform, combining education materials on diet and physical activity (PA) with social networking, versus a control group receiving only education materials, produced 2.8% and 0.8% weight loss, respectively. Similarly, Pagoto et al. (23) examined participants who received a weight loss programme using an established OSN (Twitter©) and reported weight loss of 3% from baseline to 12 weeks. The inconsistencies in results for studies using OSNs to deliver weight management are partially attributed to variation in intervention design and differences in OSN platforms. Moreover, the reduced cost of OSN-delivered interventions has been cited as a major advantage (26); however, a cost analysis for OSN interventions has yet to be completed.

Therefore, to evaluate the practicality and efficacy of using an OSN to deliver a weight management programme, we conducted a 6-month randomized feasibility study comparing weight loss between an established cost-effective weight management delivery system (group conference call) (12) and an OSN (Facebook©) delivery system. The primary aim was to determine if weight loss at 6 months was significantly different for participants randomized to group conference call or OSN groups. Secondary outcomes included changes in waist circumference as an indication of reduction in chronic disease risk, and dietary intake and PA to help explain both group and individual differences in weight change. Lastly, a cost analysis of the group conference call and OSN delivery was conducted.

Methods

The rationale, design and methods for this trial have been described in detail in a previous publication (27). Information herein pertains to the current report.

Participants

Seventy adults with obesity (age = 21–70 years; body mass index [BMI] = 30.0–45.0 kg·m−2) were randomized at a 1:1 ratio, stratified by sex, to either group phone conference call or OSN groups. Potential participants were excluded if they were unable to participate in moderate intensity PA (i.e. walking), were regularly exercising (>90 min/week) or were at serious medical risk (as determined by the study's medical director). Additionally, participants were excluded if they reported any of the following within the previous 6 months: participation in a weight loss or PA programme, not being weight stable, or pregnancy, lactation or planned pregnancy. To improve generalizability, individuals with chronic medical conditions were allowed to participate because they represent the population typically seeking weight management.
For instance, individuals with hypertension or type II diabetes were not excluded if their condition was controlled by medication. Written informed consent was obtained from all participants prior to participation, as approved by the Human Subjects Committee at the University of Kansas Medical Center. Participants were allowed to keep the Fitbit Flex wireless activity monitor used to self-monitor PA (described in the succeeding sections) as compensation for participating in the trial.

Intervention: conceptual framework

We employed social cognitive theory (28,29) and the problem-solving and relapse prevention models (29-31). Key elements of the intervention, incorporated through in-class/online discussions and out-of-class assignments, included goal-setting, self-monitoring, direct reinforcement, interaction with health educators and social support, all intended to facilitate change in both diet and PA.

Health educator training/standardized materials

Health educators who delivered the intervention were experienced in weight management and had backgrounds in nutrition, exercise physiology and/or behavioural counselling. Health educators were trained to deliver the phone intervention by listening to recordings of sessions delivered by phone in a previously completed trial (12) and by participating in mock group sessions that simulated live groups. Health educators were randomly assigned to administer one conference call group and one OSN group concurrently to reduce the potential for health educator bias. The diet and PA protocols, behavioural lesson topics and experiential learning assignments were identical for both groups.

Online social network intervention

Participants joined a private OSN group of 12–18 individuals using their personal OSN accounts. The OSN intervention was structured into 24 weekly online modules over 6 months. Throughout each of the 24 weeks, the health educators posted lessons (one per week), audio recordings (one per week) and four comments per week in the discussion forum to highlight the major points of the lesson, and responded with additional comments to problem-solve with participants if necessary. Participants were instructed to post a minimum of four comments on the message board per week, and intervention compliance for the OSN participants was determined by participants meeting the four-post minimum. While 'likes' show that participants were at least accessing the private group page, comments were considered a better indicator of compliance because of the increased interaction with the material required in order to post a comment. The content and discussion topics were identical for both the phone and OSN groups; however, participants in the OSN group could access study materials and interact with group members 24 h·d−1, 7 days/week, and were able to work through study materials at a rate that was comfortable for them within the 1-week module guidelines. Participants were encouraged to interact with each other and with the health educator using the OSN.

Phone conference call sessions

Sixty-minute group phone conference meetings of 12–18 participants were conducted one evening per week (24 meetings in total) for 6 months. Participants called a toll-free number 5 min prior to the scheduled meeting time and entered a unique identifying code that allowed them to join the meeting. Participants were expected to remain on the call for the duration of the 60-min session.
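Referring back to the OSN compliance rule described above (at least four message-board comments per week across the 24 weekly modules), the following is a minimal sketch of how such a compliance flag could be computed. The function names and data layout are illustrative only and are not part of the study's actual tooling.

```python
# Minimal sketch: flag OSN intervention compliance per the four-post rule.
# Hypothetical data layout: one comment count per week for a participant.

WEEKS = 24          # 6-month intervention, 24 weekly modules
MIN_POSTS = 4       # minimum comments per week to count as compliant

def weekly_compliance(posts_per_week):
    """Return one boolean per week: True if the four-post minimum was met."""
    return [n >= MIN_POSTS for n in posts_per_week]

def compliance_rate(posts_per_week):
    """Fraction of the 24 weeks in which the participant met the minimum."""
    return sum(weekly_compliance(posts_per_week)) / WEEKS

if __name__ == "__main__":
    # Invented participant: active early, tapering off later in the study.
    example = [5, 4, 4, 6, 3, 4, 2, 4, 1, 0, 4, 4,
               3, 2, 0, 1, 4, 0, 0, 2, 1, 0, 0, 1]
    print(f"compliant weeks: {sum(weekly_compliance(example))}/{WEEKS}")
    print(f"compliance rate: {compliance_rate(example):.1%}")
```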
The meeting protocol included a check-in question to generate discussion regarding diet and PA, a review of compliance with the diet and PA protocols, a lesson on a weight management topic, and an experiential learning assignment that required problem-solving or the practice of behavioural weight management strategies to be completed prior to the next meeting. During the phone calls, participants in attendance were encouraged to actively participate in the call and interact with other participants. All calls were recorded and the audio posted to the private OSN group as optional learning material.

Weight loss diet for phone and online social network groups

For participants in both the phone and OSN groups, energy intake was reduced 500–700 kcal·d−1 below estimated total daily energy expenditure, calculated with the Mifflin-St Jeor equation (32) using an activity factor for sedentary/low-active individuals of 1.15. Nutritionally balanced, high-volume, lower-fat (fat = 20%) diets, as recommended by the Academy of Nutrition and Dietetics (33) and the USDA's MyPlate approach (34), were prescribed for all participants. Participants were provided examples of meal plans consisting of suggested servings of grains, proteins, fruits, vegetables, dairy and fats based on their energy needs, and were counselled on appropriate portion sizes.

Physical activity for phone and online social network groups

A progressive, moderate intensity PA programme (walking, jogging, biking, etc.), as recommended in the 2009 American College of Sports Medicine Position Stand on physical activity interventions for weight loss and prevention of weight regain in adults (35), was prescribed. PA progressed from 45 min/week in week 1 to 300 min·week−1 at the end of week 16, and remained at 300 min·week−1 for the duration of the study. A Fitbit wireless activity monitor (Flex tracker, Fitbit Inc., San Francisco, CA, USA; size 35.5 × 28 mm) was provided to track steps, in addition to acting as a motivational tool and an incentive for participation in the intervention.

Self-monitoring

Participants in both the phone and OSN groups recorded body weight using a calibrated wireless digital scale (Withings Wireless Scale, WITHINGS Inc., Cambridge, MA, USA), diet/food consumed and minutes of PA using the MyFitnessPal™ application, and steps using a Fitbit activity monitor. Participants were instructed to record weight a minimum of once per week; however, daily weighing was encouraged. Fitbit and MyFitnessPal™ data were to be submitted daily. All applications were synced to the MyFitnessPal™ application, which uploaded the data to a cloud server. Self-monitoring data were available for real-time feedback to the participants through the applications, and were also downloaded by the health educator to provide participant feedback and education. Self-monitoring weights from the wireless scale were used only for participant feedback and not for outcome weight.

Assessments

Outcome measures were collected in our laboratory by trained staff at baseline and following weight loss (month 6).

Body weight, height, body mass index and waist circumference

Body weight was recorded using a digital scale accurate to ±0.1 kg (Befour Inc. Model #PS6600, Saukville, WI, USA). All participants were weighed between 0600 and 1000 h prior to breakfast, wearing a standard hospital gown, after attempting to void. Height was measured using a stadiometer (Model PE-WM-60-84, Perspective Enterprises, Portage, MI, USA), and BMI (kg·m−2) was calculated.
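To make the energy prescription above concrete, here is a small sketch of the arithmetic: resting metabolic rate from the Mifflin-St Jeor equation, scaled by the sedentary/low-active factor of 1.15 used in the study, minus the 500–700 kcal·d−1 deficit. The exact rounding and clinical adjustments applied by the study team are not described in the paper, so treat this as an illustration only; the example participant is invented.

```python
def mifflin_st_jeor_rmr(weight_kg: float, height_cm: float,
                        age_yr: float, sex: str) -> float:
    """Resting metabolic rate (kcal/d) via the Mifflin-St Jeor equation."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base + 5.0 if sex == "male" else base - 161.0

def prescribed_intake(weight_kg, height_cm, age_yr, sex,
                      activity_factor=1.15, deficit=(500, 700)):
    """Daily calorie target range: estimated TDEE minus a 500-700 kcal deficit."""
    tdee = mifflin_st_jeor_rmr(weight_kg, height_cm, age_yr, sex) * activity_factor
    return tdee - deficit[1], tdee - deficit[0]   # (low, high) kcal/d

if __name__ == "__main__":
    # Hypothetical participant near the study's mean profile (~47 y, BMI ~36).
    low, high = prescribed_intake(weight_kg=100, height_cm=166,
                                  age_yr=47, sex="female")
    print(f"target intake: {low:.0f}-{high:.0f} kcal/d")
```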
Waist circumference was measured using the procedures of Lohman et al. (36).

Diet intake

Baseline and 6-month energy and macronutrient intake were assessed using data obtained from MyFitnessPal™. During the orientation session, all participants completed a tutorial on how to use the MyFitnessPal application and log diet information using their personal device. Participants were instructed to log all food and beverages consumed into the MyFitnessPal™ application for three consecutive days (two weekdays, one weekend day) prior to their scheduled assessment visit. MyFitnessPal™ then synced the data to a cloud server, from which it was downloaded by research staff.

Physical activity

A randomly selected subset of participants wore an ActiGraph GT1X portable accelerometer (ActiGraph LLC, Pensacola, FL, USA) on a belt over the non-dominant hip for seven consecutive days at baseline and month 6. Accelerometer data were collected in 1-min epochs, with a minimum of 10 h constituting a valid monitored day. Participants with a minimum of three valid days were included in the analysis. The outcome variable was the average ActiGraph counts·min−1.

Cost analysis

Resources used were measured by surveying participants at the end of the intervention (6 months) and by reviewing time logs maintained by health educators. Surveys were completed by 62 of the 70 participants (89%). Resource use data were converted to costs using standard prices (i.e. the median hourly wage in the area). A two-sample t-test was used to compare average costs for the conference call and OSN groups.

Statistical analyses

The primary analysis compared weight loss at 6 months between phone and OSN groups using a two-sample t-test. Our intention-to-treat approach for handling missing data used multiple imputation; i.e., 100 imputed datasets were generated, and the analysis results from each imputed dataset were then combined to make valid statistical inference. For secondary variables (change in BMI, percent change in weight, change in waist circumference, and energy and macronutrient intake), two-sample t-tests were used to compare the groups without imputation. We examined the distribution of weight loss based upon categories of change between the groups using a chi-square test of homogeneity. Diet compliance (number of complete records submitted on MyFitnessPal™ and number of diet records within ±100 kcal of the prescribed calorie goal) as well as self-reported PA, steps (Fitbit) and accelerometer data were compared between the groups using two-sample t-tests.

Participants

Baseline data for the 70 eligible participants who initiated the weight loss intervention (OSN, n = 34; phone, n = 36) are presented in Table 1. Sixty participants (86%) provided weight data at baseline and 6 months. The study sample had obesity (BMI ~36 kg·m−2), was middle-aged (~47 years), predominantly female (~84%) and composed of ~24% minorities. There were no baseline differences between participants randomized to phone or OSN groups.

Body weight, body mass index, waist circumference

Changes in weight, BMI and waist circumference are reported in Table 2. Weight change (kg) from baseline to 6 months was not significantly different between groups (p = 0.566). Weight change (%) from baseline to 6 months was −6.3 ± 6.4% and −5.8 ± 6.7% for phone and OSN groups, respectively (p = 0.765). No significant group differences were observed for BMI and waist circumference.
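As a sketch of the comparisons named in the statistical analyses section, the following shows a two-sample t-test on 6-month weight change and a chi-square test of homogeneity on weight-loss categories, using standard SciPy routines. The arrays are placeholders, not trial data, and the multiple-imputation step (100 imputed datasets combined before inference) is omitted here for brevity.

```python
import numpy as np
from scipy import stats

# Placeholder 6-month weight changes (kg); real values came from the trial.
phone_delta = np.array([-7.1, -5.4, -9.8, -2.0, -6.6, -4.3, -8.0, -3.5])
osn_delta   = np.array([-6.0, -4.9, -1.2, -8.3, -5.5, -7.7, -0.4, -6.1])

# Primary comparison: two-sample t-test on weight change between groups.
t_stat, p_val = stats.ttest_ind(phone_delta, osn_delta)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Distribution of weight-loss categories (gained, 0-<5%, 5-<10%, >=10%):
# chi-square test of homogeneity on a 2 x 4 contingency table of counts.
counts = np.array([
    [4, 8, 10, 9],   # phone (placeholder counts)
    [6, 9,  9, 6],   # OSN   (placeholder counts)
])
chi2, p, dof, expected = stats.chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```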
At 6 months, the proportions of participants who gained weight and those who lost 0 to <5%, 5 to <10% and ≥10% weight also did not differ significantly between the phone and OSN groups (Table 3).

Intervention compliance and dietary intake

Attendance for the phone group from baseline to 6 months was 77.7%, while compliance for the OSN group (≥4 comments/week) from baseline to 6 months was 29.5%. Overall engagement with the OSN group was quantified by the number of times participants 'liked' a study-related post or a post by a peer, or posted a comment. The average number of 'likes' per person each week was 1.3. The average number of comments per person each week was 3.2. Participants in the OSN group reported listening to ~36% of the posted audio lectures. The proportions of diet records submitted on MyFitnessPal™, records within ±100 kcal of the prescribed calorie goal, PA minutes (self-report) and steps (Fitbit) each week are shown in Table 4. Diet records within ±100 kcal of the prescribed calorie goal were significantly lower in the OSN group (22.0 ± 21.2%) compared with the phone group (44.0 ± 26.2%; p < 0.001). No significant differences were found between the groups for any other variables. Results for diet intake are shown in Table 5. There were no statistically significant differences between the phone and OSN groups for total energy, fat, carbohydrate and protein intake at baseline or 6 months.

Physical activity

The average target for minutes of PA was 232 min·week−1 from baseline to 6 months, due to the ramp-up of the progressive protocol to 300 min·week−1.

Discussion

Results from this pilot study indicated that weight loss from an intervention delivered through an OSN was not significantly different compared with an established cost-effective weight loss delivery system (group phone conference call). Both phone and OSN groups met the American Heart Association/American College of Cardiology/The Obesity Society's Guidelines (14) by reducing baseline weight by 5–10% within 6 months. These results compare favourably with results from Sepah et al., in which participants lost 5.0% of their baseline weight at 16 weeks and maintained 4.8% weight loss after 15 months (24). While there were no significant differences in mean weight loss or in the proportions of participants who gained weight and those who lost 0 to <5%, 5 to <10% and ≥10% between participants randomized to OSN and phone, more individuals in the phone group lost >5% of their body weight, suggesting that an adequately powered trial is needed to compare weight loss between phone and OSN delivery. Attendance could not be directly compared between groups because of the variation in delivery methods and expectations. However, from baseline to 6 months, participants in the phone group attended ~78% of the scheduled meetings, while participants in the OSN group met the minimum four-post-per-week criterion ~30% of the time. Participants in the OSN group reported that a lack of familiarity with other members limited their comfort in sharing information in the OSN group. Future studies need to address this issue. A possible solution would be to have participants meet for a small number of face-to-face sessions prior to starting, or throughout, the OSN intervention to gain familiarity with one another. Compliance with weekly data reporting of steps, PA, weight and energy intake did not differ between groups.
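The diet-record compliance metric reported above (the proportion of submitted records falling within ±100 kcal of the prescribed goal) can be sketched as below; the calorie goal and daily values are invented for illustration, whereas in the study these came from MyFitnessPal exports.

```python
# Sketch: proportion of diet records within +/-100 kcal of the prescribed goal.

def within_goal_rate(logged_kcal, goal_kcal, tolerance=100):
    """Fraction of daily records within +/-tolerance kcal of the calorie goal."""
    hits = [abs(day - goal_kcal) <= tolerance for day in logged_kcal]
    return sum(hits) / len(hits)

if __name__ == "__main__":
    goal = 1300                                    # hypothetical prescribed kcal/d
    week = [1250, 1420, 1330, 1180, 1390, 1600, 1270]
    print(f"records within +/-100 kcal: {within_goal_rate(week, goal):.0%}")
```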
However, compliance with logging was low, with both groups submitting complete reports only ~36% of the time, and reporting of energy intake data being the most frequently missed. Because adherence factors such as percent complete weekly reports, reports within ±100 calories of the daily goal, number of weights submitted and steps have previously been shown to lead to greater weight loss (20,37-41), future studies comparing phone and OSN delivery should strive to increase adherence to the intervention protocol, as greater adherence to these process variables could produce greater weight loss. Alternate diet strategies, such as portion-controlled meals, may be beneficial by reducing the barrier of tracking specific foods consumed.

In the attempt to automate the intervention, energy intake outcome assessments and weekly data reporting were completed using the MyFitnessPal™ application. Participants were instructed not to change their diet and were not given a reduced calorie goal prior to the first session. However, because of the low energy intake values recorded at baseline, it appears that nutritional feedback from the MyFitnessPal™ application may have resulted in participants either underreporting or altering their diet during assessments. Thus, the values of energy intake should be interpreted with caution. Validated measures of energy intake, such as photo-assisted dietary assessments, may be used to assess daily energy and macronutrient intake and would likely improve the accuracy of dietary assessment (42). Furthermore, participants indicated problems understanding how to sync and navigate devices within their mobile applications. This could explain the low numbers for data reporting. Future studies might improve data compliance by increasing the frequency or duration of technology trainings.

The total cost per participant for the 6-month intervention was ~30% higher for the phone group compared with the OSN group. As cost is frequently cited as a barrier by participants, a cost saving of 30% (~$148) could represent a substantial saving for an individual wanting to lose weight and paying for such services out of pocket. In addition, provider cost was also substantially lower (~$574) in the OSN group, which could make this programme more feasible for providers to offer to individuals in low-income settings or as a low-cost corporate wellness opportunity for employers. In addition, the freedom for participants to attend at their own leisure and for providers to deliver the programme on a flexible schedule may increase the number of qualified individuals who are interested not only in attending such programmes but also in providing weight management. This may allow more service providers to engage in weight management who were not previously able to do so due to time, cost or the constraints of reaching their clientele (i.e. remote/rural locations).

Strengths of this study include: (i) a design specific to evaluating two potentially effective strategies for the delivery of a weight loss intervention; (ii) both technologies evaluated being readily available and accessible, so the interventions could be widely disseminated; (iii) a phone comparison group that is an established technology for weight loss intervention delivery (12); and (iv) use of an established OSN. Using an existing popular OSN reduces the training required to manoeuvre through the site, as the majority of the public is already familiar with it.
Additionally, it reduces the barrier of participants having to log into additional sites to obtain information. However, this study is not without its limitations. Because of funding and supply constraints, limitations include: (i) the sample size, and the fact that the study was not powered to detect between-group differences; (ii) the lack of a weight maintenance follow-up period; (iii) the inability to blind participants and health educators to treatment condition; and (iv) potential differences in motivation to lose weight between participants who agree to participate in a research study and the general public. Despite these limitations, both groups did achieve the recommended weight loss of >5% within 6 months (14).

Conclusion

The purpose of this study was to inform future, larger scale, adequately powered trials. The primary finding from this investigation was that a weight loss intervention delivered using group conference calls or an OSN resulted in clinically meaningful weight loss. Although self-monitoring components essential to weight loss (such as reporting diet and PA) were low, these components were used equally across the groups. Future research is needed to determine ways to increase engagement and utilization of the intervention and self-monitoring methods, to increase adherence and weight loss success. Furthermore, with minimal resources required to administer a behavioural weight loss intervention through OSNs, its utility for disseminating weight loss interventions to the public and supporting long-term weight loss maintenance should continue to be explored.
Vulnerability to COVID-19, prejudice, and support for economic restrictions towards countries with high level of contamination

Recent research has extensively investigated how the current COVID-19 pandemic can affect intergroup relations. Much less is known about the impact of COVID-19 on economic and trade decisions. Could the intergroup effects of this pandemic shape support for international economic policies? The aim of this study was to examine support for restrictive economic policies towards countries with very high levels of COVID-19 contamination (China and Italy) during the first lockdown period (March–April 2020). The survey was conducted in Romania (N = 669) and included measures of COVID-19 vulnerability, prejudice, and support for restrictive economic policy (e.g., to reduce international trade; to set higher taxes). Results showed that higher support for restrictive policies toward China was associated with greater perceived vulnerability to COVID-19, and this link was partially mediated by prejudice toward China. In contrast, support for restrictive economic policies toward Italy was greater when perceived vulnerability to COVID-19 was high, but this relationship was not explained by negative attitudes towards Italy. Practical and theoretical implications are discussed.

Introduction

Vulnerability to COVID-19 has had a significant impact on international relations at different levels. The spread of COVID-19 around the world created a sudden shortage of basic supplies needed to protect individuals from contamination and to provide medical care (e.g., masks, gloves, medical devices, drugs). In Europe, this led national governments to realize the impact of relocating production to Asian countries where labor costs are cheaper. During this time of pandemic crisis, perceptions of overreliance on China, a major producer of products needed to cope with COVID-19, were high in European countries. Consequently, the lockdown period was an opportunity for economists and politicians to discuss the potential economic restrictions that European countries could impose on Asian countries, and especially China (La Tribune, 2020).

While the pandemic context creates favorable conditions for economic restrictions on China, research in social psychology has shown that negative intergroup attitudes can develop as a result of disease threats (Kim et al., 2016; Navarrete & Fessler, 2006). Social Identity Theory (SIT; Tajfel & Turner, 1986) shows that, to achieve or maintain a positive social identity, people favor their in-group and discriminate against out-groups. This out-group discrimination appears when one's social identity is salient and when relations with other groups are competitive or conflictual (Abrams & Hogg, 2010). Moreover, a stable group identity is necessary to build meaningful frames of oneself and to navigate the social world while maintaining a positive social identity (Hogg, 2007). In a crisis such as a global pandemic caused by a new virus, everything becomes less predictable and stable, and one's group identity can become vulnerable (Abrams et al., 2021). When one's group identity is threatened (here, by an unstable and unpredictable pandemic situation), members will identify more with their in-group but also differentiate themselves more from out-groups (even more so when the out-groups can be identified as a possible origin of the threat), and thus express more prejudice (Kruglanski et al., 2021).
In turn, negative intergroup attitudes are known to influence consumer behavior and support for different economic policies (Klein, 2002). Could part of the discourse advocating economic restrictions towards foreign countries during COVID-19 contamination be explained by prejudice? Recent evidence does suggest that the effects of COVID-19 have already extended to individuals' support for restrictive social policies towards threatening out-groups (Zhai & Du, 2020). In a recent study, Alston et al. (2020) showed that UK participants' prior intergroup contact with Chinese individuals predicted their support for purely discriminatory state policies towards Chinese nationals in the UK (e.g., closing down all Chinese restaurants). Yet no study so far has investigated whether support for economic policies could be similarly shaped by prejudice, catalyzed in the context of a global pandemic like COVID-19. In the present research, we examined the attitudes of Romanian citizens towards an Asian country (China) and towards a European one (Italy), two countries with a high level of contamination before and during the lockdown period (Liu et al., 2020). We will begin by reviewing empirical studies concerning the link between vulnerability to disease and prejudice against out-groups. We will then examine the impact of prejudice on economic attitudes towards other countries. Finally, we will present the current research and the hypotheses we have developed.

Vulnerability to diseases and prejudice against out-groups

From an evolutionary perspective, there is a link between the ancestral adaptive utility of avoiding potentially dangerous encounters and the contemporary tendency to perceive foreigners negatively, by expressing prejudice and discrimination against them (Faulkner et al., 2004; Schaller, 2003). Indeed, in ancestral environments, one adaptive behavior was the avoidance of individuals who were thought to be carriers of contagious diseases. Individuals from foreign lands were considered more likely to display customs that favor the transmission of disease (related to hygiene, food preparation, etc.), and to engage in practices that violate rules adopted by in-group members to protect themselves from prevalent pathogens (Faulkner et al., 2004; Schaller, 2003). Recent research confirms that people feel more vulnerable to disease transmission during an interaction with an out-group member than with an in-group member (Greenaway & Cruwys, 2019; Smith & Gibson, 2020; Zhai & Du, 2020). Even when the risk of infection is potentially the same, individuals perceive more threat when the risk behavior is carried out by a member of an out-group than by a member of the in-group. For example, one study found that when a person has a cold and leaves a used handkerchief in a shared workplace, participants perceive a higher risk of contamination if that person belongs to an out-group than if he or she belongs to an in-group. Consequently, to protect themselves against diseases to which they have not yet developed immunity, individuals avoid out-group members and therefore exhibit more negative attitudes against them (Navarrete & Fessler, 2006; Reny & Barreto, 2020).

Many empirical studies have examined the link between perceived vulnerability to diseases and intergroup attitudes. In a recent article, a nationally representative sample of Americans completed a survey indicating their perceptions of vulnerability to Ebola and their xenophobic tendencies (Kim et al., 2016).
Results showed that the more vulnerable people felt, the more they exhibited prejudice toward populations coming from the directly affected regions in West Africa. Research has also shown that COVID-19 threat perceptions were associated with increased negative attitudes towards foreign groups. Infection avoidance tendency predicted increased exclusionary attitudes towards foreigners in Japan (Yamagata et al., 2020), and increased xenophobic and anti-Syrian-immigrant attitudes in Turkey (Adam-Troian & Bagci, 2020).

Integrated Threat Theory (ITT) considers threat and fear to be predictors of prejudice, and holds that four different types of threat underlie negative evaluations of out-groups: realistic threat, symbolic threat, negative stereotypes and intergroup anxiety (Stephan & Stephan, 2000). Realistic threats encompass potential dangers to the physical well-being and the economic and political power of one's group. The global pandemic of COVID-19 is a realistic threat that can impact the well-being or the power of one's group. To survive (the evolutionary perspective), to maintain a positive social identity (Social Identity Theory) and to protect their group's interests, individuals will develop prejudice and discrimination towards threatening out-groups (Gonzalez et al., 2008; Nshom & Croucher, 2017). Moreover, affective and motivational factors play an important role in intergroup relations (Correnblum & Stephan, 2001). Indeed, according to ITT, perceptions of threat relate directly to perceptions of vulnerability, which increase the expression of prejudice as a way to reduce experienced anxiety (Demirtas-Madran, 2020; Nshom & Croucher, 2017; Tabri et al., 2020).

Faulkner and collaborators (2004) also examined the extent to which xenophobic attitudes are predicted by self-reported perceptions of vulnerability to disease in general, among Canadians. In their study, participants who perceived themselves to be vulnerable to diseases were more likely to associate foreign people with danger and had less favorable attitudes towards foreigners. Interestingly, the link between vulnerability to diseases and intergroup attitudes was moderated by the perceived familiarity of out-groups. Specifically, perceived vulnerability to disease predicted more xenophobic attitudes towards out-groups that were subjectively perceived as less familiar (e.g., immigrants coming from Africa, Peru, Qatar, and Sri Lanka). In contrast, perceived vulnerability was not related to prejudice against more familiar out-groups (e.g., European immigrants; Faulkner et al., 2004). We assume that, in our study, Romanian participants perceive Italy in terms of a historical, more familiar, superordinate European in-group, while perceiving China as an out-group. In addition, Chinese people can be more strongly associated with the threat of COVID-19 than other groups. Indeed, in a recent study, a specific link was found between anti-Asian attitudes and fear of COVID-19 among Americans (Reny & Barreto, 2020). The authors analyzed the relationship between attitudes toward Asian Americans and toward other American racial groups, and participants' concern about contracting the coronavirus. Results showed that vulnerability to COVID-19 was related to prejudice against Asians and not against other racial out-groups, meaning the threat was perceived as coming from Asia, where the virus first spread.
In line with these previous works, we therefore expect that perceived vulnerability to COVID-19 is more likely to be associated with prejudice against Chinese people, perceived as unfamiliar out-group members. In addition, vulnerability to COVID-19 has also been associated with increased authoritarianism and national conservatism (Bieber, 2020).

Prejudice and economic attitudes

Xenophobic attitudes have an important impact on consumers' economic attitudes. People may prefer to buy in-group rather than out-group products in order to support the national economy (Klein, 2002). However, national in-group favoritism is not only explained by material concerns (resource maximization for the in-group). National preference in the economic field is also tied to individuals' motivation to obtain or maintain a positive national identity. Favoring domestic production is seen as a way to improve the socio-economic status of one's country in relation to other foreign countries and, in doing so, to boost positive evaluations of one's national identity. The COVID-19 pandemic is a realistic threat not only to health but also to the economic domain. The COVID-19 crisis therefore caused a "depressive socio-economic environment" (Elias et al., 2021, p. 785), in which individuals favor nationalism and conservatism to maintain a positive social identity. Nationalism and conservatism in times of crisis lead to more support for authoritarian policies (e.g., anti-immigrant policies and exclusionary nationalism; Hartman et al., 2021; Green et al., 2010), even more so towards out-groups perceived as culturally and economically threatening (Duckitt & Sibley, 2010).

From the same perspective, researchers have examined a general form of prejudice, "consumer ethnocentrism", and its consequences for economic attitudes. Consumer ethnocentrism captures the economic motives for in-group bias, such as the fear that choosing foreign products threatens the national industry and causes high unemployment (Verlegh, 2007; Sharma, 2011). For example, in a study conducted in the United States, consumer ethnocentrism was the dominant factor in choosing between a domestic good and a Japanese good (Klein, 2002): the higher Americans scored on the ethnocentrism scale, the more they preferred national products.

It is possible that prejudice might influence support for restrictive economic policies towards the two countries (Italy and China). However, prejudice against Chinese people in Europe is high (e.g., Mahfud et al., 2016), while attitudes towards Italians are rather positive. This difference in prejudice between Chinese and Italians is explained by the perception of cultural distance (Mahfud et al., 2016). Indeed, perceived cultural distance (e.g., regarding values, religion, language, family, and married life) is associated with stronger feelings of threat (Goto & Chan, 2005; Mahfud et al., 2018) and is considered a determinant of intergroup relations and an important factor in negative intergroup attitudes (Allport, 1954; Brewer & Campbell, 1976). Studies have shown that the greater the perceived cultural distance towards a group, the more prejudice individuals express against it (Mahfud et al., 2016; Mahfud et al., 2018; Schalk-Soekar & Van de Vijver, 2008). When a high cultural distance toward an out-group is perceived, familiarity will therefore be lessened (Goto & Chan, 2005).
According to the association between perceived vulnerability to disease and discrimination (Faulkner et al., 2004), Romanian participants should feel more familiar with Italians, and thus express less prejudice towards them than toward the Chinese group. Furthermore, negative economic attitudes towards China are encouraged by European institutions as a form of protectionism. For example, the report of the European Court of Auditors (September 9, 2020) evaluated Beijing's investments in Europe over the last decade at 150 billion euros. The rapporteurs invited European countries to adopt a common strategy and to question the consequences of Chinese takeovers in sectors such as energy, telecoms, or ports, evoking a real "threat" (La Croix, 2020). We predict that participants' prejudice towards China will be higher and that, consequently, they will express higher support for restrictive economic policies against China than against Italy.

The present study

The objective of this research is to examine the support of Romanian citizens for restrictive economic policies towards countries with very high levels of COVID-19 contamination (China and Italy) during the first lockdown period (March–April 2020). The survey includes measures of perceived COVID-19 vulnerability, intergroup prejudice, and support for restrictive economic policy (to reduce international trade; to set higher taxes). We formulate the following hypotheses: (1) support for restrictive policies toward the Asian country (China) should be greater when perceived vulnerability to COVID-19 is high and when prejudice toward Chinese people is strong; (2) the effect of COVID-19 vulnerability on support for economic restrictions towards China should be mediated by negative intergroup attitudes; (3) support for restrictive economic policies toward the European country (Italy) should be greater when perceived vulnerability to COVID-19 is high.

Participants

The sample consisted of Romanian citizens who voluntarily took part in the study (N = 669, M age = 38.3, SD age = 12.2, 55% females). Participants completed a survey in Romanian. Questionnaires were translated from English and back-translated by native speakers. The survey was distributed online through Qualtrics. Measures are presented below. Data and supplemental materials are publicly accessible on OSF: https://osf.io/w5s92/?view_only=242e0264d268431a84ea3e7307ccfae5 The procedure meets the European ethical requirements.

Measures

Perceived vulnerability to COVID-19 risk. Questions were adapted from the Perceived Risk Scale (Napper et al., 2012; Kim et al., 2016). Three items assessed perceptions of personal risk (e.g., "I feel vulnerable to COVID-19 infection"), three items assessed perceptions of the local community's risk (e.g., "I feel that people in my local community are vulnerable to COVID-19 infection"), and further items assessed perceptions of risk to the country (e.g., "I feel that my country is vulnerable to an outbreak of COVID-19"). We calculated a single score of perceived vulnerability, a higher score indicating greater vulnerability (α = .81; 7-point Likert scale).

Prejudice. Participants indicated their feelings towards citizens from two countries (one European country, Italy, and one Asian country, China) by responding to six items from a scale designed to assess prejudice. The measure included four positive feelings (acceptance, sympathy, warmth, liking, all reversed) and two negative feelings (hostility, fear), as used in Stephan et al. (1998) (attitudes towards Chinese α = .70; attitudes towards Italians α = .70; 7-point Likert scale). A higher score indicates higher prejudice.

Support for restrictive economic policy. A set of three items measured participants' support for restrictive economic policy for a European country (e.g., "I would like Romania to reduce trade with Italy as much as possible"; "I would like to see higher taxes (customs duties) on products from Italy"; α = .80) and for an Asian country (e.g., "I would like Romania to reduce trade with China as much as possible"; "I would like to see higher taxes (customs duties) on products from China"; α = .80; all 7-point Likert scales). A higher score indicates higher support for economic restriction.

Results

Descriptive statistics are presented in Table 1 (95% CI [−.36, −.20], d = 0.28). We also note that prejudice towards China correlated highly with prejudice towards Italy, suggesting a form of ethnocentrism. In addition, the more participants supported restrictive economic policies against China, the more they agreed with higher restrictions against Italy. A paired-samples t-test showed that prejudice against Chinese people was significantly higher than prejudice against Italians, t(668) = 12.43, p < .001, 95% CI [−.56, −.41], d = 0.48. Likewise, support for restrictive economic policies towards China was stronger than support for this type of policy towards Italy, t(668) = 5.56, p < .001, 95% CI [−.29, −.13], d = 0.21.

According to our hypotheses, support for restrictive policies toward China should be greater when perceived vulnerability to COVID-19 is high, and this link should be mediated by prejudice toward Chinese people. Support for restrictive economic policies toward Italy should be greater when perceived vulnerability to COVID-19 is high. In order to test these hypotheses, we conducted regression analyses examining support for restrictive economic policy toward China and toward Italy separately.

Support for restrictive economic policy toward Italy

We hypothesized that perceived vulnerability to COVID-19 would be related to higher support for restrictive economic policy against Italy. We also explored whether the link between these two variables was mediated by prejudice against Italy, as was the case for China. Simple regression analysis showed that the impact of perceived vulnerability to COVID-19 on support for restrictive economic policies was significant, b = .21, SE = .04, t(667) = 4.79, p < .001, ηp² = 0.03. However, the link between perceived vulnerability to COVID-19 and prejudice against Italy was not significant, b = .01, SE = .04, t(667) = 0.66, p = .509, ηp² = 0.001. Therefore, the conditions were not satisfied to further test the mediation model.

Discussion

In this study, we examined the link between vulnerability to COVID-19, prejudice, and support for economic restrictions towards countries with high levels of contamination. We showed that support for restrictive economic policies toward an Asian country (China) was greater when perceived vulnerability to COVID-19 was higher and when prejudice toward Chinese people was stronger. The more participants perceived vulnerability to COVID-19, the more they supported restrictive economic policies against China. And the more participants expressed prejudice against Chinese people, the more they wanted their country to impose economic restrictions on this out-group. In addition, the link between perceived vulnerability and support for economic restrictions was partially mediated by prejudice.
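The mediation logic reported above for China (vulnerability → prejudice → support for restrictions) can be sketched with ordinary least squares regressions, as below. This follows the classic three-step approach rather than the authors' exact procedure, which the paper does not fully specify; the variable names and simulated data are illustrative, and in practice the indirect effect would usually be tested with bootstrapped confidence intervals.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative simulated data standing in for the survey (N = 669 in the study).
rng = np.random.default_rng(0)
n = 669
vulnerability = rng.normal(0, 1, n)
prejudice = 0.3 * vulnerability + rng.normal(0, 1, n)            # a-path built in
support = 0.2 * vulnerability + 0.4 * prejudice + rng.normal(0, 1, n)
df = pd.DataFrame({"vuln": vulnerability, "prej": prejudice, "support": support})

# Step 1 (total effect, c-path): support ~ vulnerability
total = smf.ols("support ~ vuln", data=df).fit()
# Step 2 (a-path): prejudice ~ vulnerability
a_path = smf.ols("prej ~ vuln", data=df).fit()
# Step 3 (b-path and direct effect c'): support ~ vulnerability + prejudice
direct = smf.ols("support ~ vuln + prej", data=df).fit()

a = a_path.params["vuln"]
b = direct.params["prej"]
print(f"total effect c   = {total.params['vuln']:.3f}")
print(f"direct effect c' = {direct.params['vuln']:.3f}")
print(f"indirect effect a*b = {a * b:.3f}")
# Partial mediation corresponds to c' being smaller than c yet still significant.
```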
Results also showed that support for restrictive economic policies toward a European country (Italy) was greater when perceived vulnerability to COVID-19 was higher. Our results are in line with other research showing that ethnocentrism increases when one feels vulnerable to a disease, especially through prejudice and discrimination against foreign groups (Navarrete & Fessler, 2006; Hodson & Costello, 2007; Meleady et al., 2021).

This study presents several limitations. First, the data are correlational, preventing any conclusions in terms of cause and effect. It is not clear whether the fear of COVID-19 increases negative intergroup attitudes or vice versa. It is also unclear whether prejudice increases support for restrictive economic policies or the reverse. However, theoretical models suggest that fear of disease in general may increase prejudice towards out-groups (e.g., Integrated Threat Theory; Gonzalez et al., 2008; Kim et al., 2016; Navarrete & Fessler, 2006), and that prejudice, in turn, influences consumer behavior, in this case support for discriminatory economic policies (Klein et al., 1998; Klein, 2002). Another limitation is that we only indirectly tested the theoretical assumptions on which we built our reasoning. Indeed, we did not measure the cultural distance perceived by Romanians between the in-group and the Chinese or Italian out-groups. Nor did we measure the COVID-19 threat perceived as coming from these countries, China and Italy in particular. Although these theoretical assumptions have been supported in empirical studies, future research should directly test the role of perceived threat or cultural distance in the link between disease vulnerability and intergroup attitudes.

Since time immemorial, communities have implemented strategies to survive in the face of disease. They perceive members of out-groups as carriers of pathogens that should not be allowed to enter the territory of the in-group (Navarrete & Fessler, 2006). Moreover, individuals' perception of vulnerability to a health threat is influenced by group processes. Under conditions of high threat, cooperation and trust are essential within a group to cope with the uncertainty of the situation (Eder et al., 2020; Schaller, 2011). Prejudice and discrimination against out-groups will also be higher in these circumstances (Reny & Barreto, 2020; Schaller, 2011; Tajfel & Turner, 1986). For example, Americans showed more xenophobic attitudes toward foreigners during the Ebola epidemic: the more disgust and fear they felt about the disease, the more they called for government interventions to restrict the entry of individuals from West Africa into the US (Kam, 2019). Measures taken by some governments during the first lockdown period (March–April 2020) illustrate this model of in-group protection, already demonstrated in evolutionary psychology. But in-group favoritism goes beyond COVID-19. Certainly, there are objective reasons for relocating to the national territory the manufacture of products needed to cope with a pandemic. Indeed, the importance of being self-reliant and reducing dependency on China was discussed in many countries at the onset of this sanitary crisis (Verma & Naveen, 2021). However, what the present study suggests is that the pandemic is also an opportunity to impose significant restrictions in all sectors on a country that has become an economic threat, which has nothing to do with the COVID-19 crisis (see for instance, La Croix, 2020).
It is no longer the fear of COVID-19 that explains the support for economic restrictions but the prejudices that individuals may hold towards an out-group. Beyond the obvious implications regarding the psychological predictors of economic restrictions towards foreign countries, the present results suggest a pathway through which intergroup attitudes and pathogen threat may lead to political preference for authoritarian leaders. Such leaders often promote both socially and economically conservative policies, especially during societal crises (Sprong et al., 2019). In critical times, people seek support from leaders who are considered stronger, more action-oriented, and possibly more authoritarian than those they usually seek (Abrams et al., 2021; Adam-Troian et al., 2020; Karwowski et al., 2020). In fact, studies have shown that health threats are related to higher support for traditionalism and political conservatism (for a meta-analysis, see Terrizzi et al., 2013). Stronger family ties, greater social conservatism and higher levels of ethnic conflict are observed in regions with a history of higher levels of disease (Meleady et al., 2021). Therefore, future research should examine how intergroup and threat perceptions shape preferences for both types of policies (conservatism versus liberalism). If the mechanisms underlying support for economic sanctions overlap with those promoting intergroup conflicts in general, then interventions designed to increase reconciliatory intergroup attitudes could also be used to target support for economic policies (Bar-Tal & Hameiri, 2020). It is therefore important to identify the motivations that determine citizens' support for restrictive economic policies beyond the present situation. International cooperation could allow a better management of social and health inequalities, and a first step would be to increase the fight against prejudice and discrimination.

Abrams, D., & Hogg, M. A. (2010). Social identity and self-categorization. In The SAGE handbook of prejudice, stereotyping and discrimination (pp. 179-193).
Abrams, D., Lalot, F., & Hogg, M. A. (2021)
Crystal structure of hexaaquanickel(II) bis{5-bromo-7-[(2-hydroxyethyl)amino]-1-methyl-6-oxidoquinolin-1-ium-3-sulfonate} monohydrate

The packing of the title compound is built up by columns of π–π stacking quinoline derivatives running along the c axis, which are interconnected by [Ni(H2O)6]2+ complex cations through hydrogen bonding.

Chemical context

Among heterocyclic rings, the quinoline ring system is of great importance due to its therapeutic and biological activities. Many new quinoline derivatives have been synthesized and used as new potential agents to treat HIV (Cecchetti et al., 2000; Tabarrini et al., 2008) and malaria (Nayyar et al., 2006) or to inhibit human tumor cell growth (Rashad et al., 2010). Recently, a simple aminoquinoline derivative has been used in colorimetric sensors for pH (Wang et al., 2014). In addition, complexes of quinoline compounds with transition metals are also known to exhibit a wide variety of structures and possess profound biochemical activities which allow them to act as antimicrobial, anti-Alzheimer's (Deraeve et al., 2008) or antitumoral agents (Yan et al., 2012; Kitanovic et al., 2014). Some complexes of polysubstituted quinoline compounds have also been used in dye-sensitized solar cells or in efficient organic heterojunction solar cells.

The new quinoline derivative (6-hydroxy-3-sulfoquinolin-7-yloxy)acetic acid (Q) was synthesized from eugenol and its antibacterial activities have been reported (Dinh et al., 2012). From Q, a series of polysubstituted quinoline compounds has been synthesized, including 5-bromo-6-hydroxy-7-[(2-hydroxyethyl)amino]-1-methyl-3-sulfoquinoline (QAO). As polysubstituted quinoline rings are known to coordinate to metal ions, the reaction between QAO and NiCl2 was studied. The reaction product could not be characterized unambiguously by IR or 1H NMR spectroscopy. Although the obtained spectroscopic data are different from those of free QAO, indicating the presence of a deprotonated hydroxyl group, no conclusion about complex formation was possible and further investigation by X-ray diffraction was necessary.

A solution containing NiCl2·6H2O (262 mg, 1.1 mmol) in 10 mL of water was added dropwise to 15 mL of an aqueous solution of QAO (754 mg, 2 mmol) and NH3 (pH ≈ 6-7). The obtained solution was stirred and refluxed at 313-323 K for three hours. The brown precipitate was collected by filtration, washed consecutively with ethanol and dried in vacuo. The obtained crystals were soluble in water and DMSO, but insoluble in ethanol, acetone and chloroform. The yield was 60%. Single crystals suitable for X-ray investigation were obtained by slow evaporation from an ethanol-water (1:2 v/v) solution at room temperature.

Refinement

Crystal data, data collection and structure refinement details are summarized in Table 2. H atoms for N18, O21, O23, O24, O25 and O26 were located in difference Fourier maps. The coordinates of H21 and H26 were refined freely, while the other H atoms were refined as riding. All C-bound H atoms were placed at idealized positions and refined as riding, with C-H distances of 0.95 (aromatic), 0.99 (methylene) and 0.98 Å (methyl). For most H atoms, Uiso(H) values were assigned as 1.5Ueq of the parent atoms (1.2Ueq for H2, H4, H10, H18, H19A/B and H20A/B).

Figure 4: Partial packing diagram of the title compound viewed along the a axis, showing the X-H···O hydrogen bonds (red dotted lines, see Table 1 for details) and C-H···Br interactions (brown dotted lines).
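As a quick plausibility check on the synthesis quantities reported above, the stoichiometry can be sketched as a limiting-reagent calculation; the 2:1 ligand-to-metal ratio is taken from the bis(sulfonate) formula of the title compound, and the only inputs are the millimole quantities quoted in the synthesis.

```python
# Minimal limiting-reagent sketch for the 2:1 QAO-to-Ni complex implied
# by the title formula. Quantities are those quoted in the synthesis;
# no product molar mass is needed for this check.
qao_mmol = 2.0   # QAO, 754 mg
ni_mmol = 1.1    # NiCl2·6H2O, 262 mg

# Each formula unit of product consumes 2 QAO and 1 Ni.
max_product_mmol = min(qao_mmol / 2.0, ni_mmol / 1.0)
limiting = "QAO" if qao_mmol / 2.0 < ni_mmol else "NiCl2·6H2O"

print(f"Limiting reagent: {limiting}")                    # QAO
print(f"Theoretical product: {max_product_mmol:.2f} mmol")  # 1.00 mmol
# The reported 60% yield would then correspond to ~0.6 mmol of product.
```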
SHELXS97 (Sheldrick, 2008); program(s) used to refine structure: SHELXL2014 (Sheldrick, 2015); molecular graphics: OLEX2 (Dolomanov et al., 2009); software used to prepare material for publication: OLEX2 (Dolomanov et al., 2009).

Special details

Geometry. All e.s.d.s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.s are taken into account individually in the estimation of e.s.d.s in distances, angles and torsion angles; correlations between e.s.d.s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.s is used for estimating e.s.d.s involving l.s. planes.
Facile Synthesis of Indium Sulfide/Flexible Electrospun Carbon Nanofiber for Enhanced Photocatalytic Efficiency and Its Application

Heterojunction systems have proved to be among the best architectures for photocatalysts, owing to their extended specific surface area, expanded spectral response range, and enhanced generation, separation, and transport of photoinduced charges, which provide a better light absorption range and more reaction sites. In this paper, Indium Sulfide/Flexible Electrospun Carbon Nanofiber (In2S3/CNF) heterogeneous systems were synthesized by a facile one-pot hydrothermal method. Characterization by SEM, TEM, XRD, Raman, and UV-visible diffuse reflectance spectroscopy showed that flower-like In2S3 was deposited on the hair-like CNF template, forming a one-dimensional nanofibrous network heterojunction photocatalyst. The newly prepared In2S3/CNF photocatalysts exhibit greatly enhanced photocatalytic activity compared to pure In2S3. In addition, the formation mechanism of the one-dimensional heterojunction In2S3/CNF photocatalyst is discussed and a promising approach to degrade Rhodamine B (RB) in the photocatalytic process is proposed.

Introduction

Nowadays, dealing with organic pollutants against the backdrop of the energy crisis is a huge challenge [1-3]. Photocatalysis, as a novel solution, has aroused great interest and has been considered one of the most effective approaches for solar energy conversion and the destruction of organic pollutants [4, 5]. Up to now, numerous experiments on the degradation of organic pollutants using photocatalysts have been reported. However, the activity of a pure photocatalyst is limited by its low efficiency of light absorption, difficult charge migration, and high recombination probability of photogenerated electron-hole pairs, and the development of photocatalytic technology is still limited by the photocatalyst itself [6-8]. Therefore, it is urgent and indispensable to find a novel photocatalyst that improves both the photochemical activity and the stability.

In2S3, a typical III-VI group sulfide, is an n-type semiconductor with a band gap of 2.0-2.3 eV corresponding to the visible light region, which has attracted intense interest for optical, photoconductive, and optoelectronic applications. Furthermore, In2S3 shows high photosensitivity and photoconductivity, stable chemical and physical characteristics, and low toxicity; it thus has great potential for visible-light-driven photodegradation of pollutants [9, 10]. Realistically, however, as with other visible-light photocatalysts, its narrow band gap and the rapid recombination of photogenerated electron-hole pairs cause poor quantum yield [11-14]. To meet practical application requirements, it is urgent and important to enhance the photocatalytic efficiency of In2S3. Up to now, many attempts have been explored to improve the photocatalytic performance of In2S3, such as metal ion doping, coupling with other semiconductors, and carbon materials-based assemblies [15, 16]. As a viable alternative route to boost the efficiency of photocatalysts, CNF-based assemblies have attracted attention [17-19]. CNF is easily synthesized by electrospinning, offers a large theoretical specific surface area and high intrinsic electron mobility, and possesses favorable physicochemical, electronic, and mechanical character as well as high absorption properties.
In particular, compared with traditional carbon nanofibers obtained by other physical and chemical methods, the carbon nanofibers synthesized by electrospinning (CNF) have stronger electronic transport properties [20, 21]. Therefore, coupling In2S3 with CNF to construct In2S3/CNF is an ideal route to enhance photocatalytic activity.

In this work, the CNF was fabricated by the electrospinning technique, and In2S3/CNF composites were fabricated through a one-pot hydrothermal reaction as shown in Scheme 1. The photogenerated electrons on the conduction band (CB) of In2S3 can easily be transferred to CNF; in brief, this positive synergetic effect arises because the formation of an interface junction can improve the optical absorption property and simultaneously facilitate the separation of photoinduced electron-hole pairs. In addition, In2S3/CNF composites show excellent performance in promising applications for the degradation of organic pollutants. This study presents a reliable method to degrade organic pollutants.

Experimental Section

2.1. Materials. All the reagents were of analytical grade and were used as received without further purification. InCl3·5H2O, thioacetamide (TAA), and other chemicals were of analytical grade and purchased from Sinopharm Chemical Reagents Co., Ltd. Polyacrylonitrile (PAN) (Mw = 150,000 g mol−1) was purchased from Sigma-Aldrich.

2.2. Fabrication of CNF. According to previous reports [22], PAN nanofiber was synthesized from PAN by a modified electrospinning method. Firstly, 1 g PAN was dissolved completely in 9 mL N,N-dimethylformamide (DMF). Then, the mixture was transferred in two batches to a 5 mL plastic syringe for electrospinning (voltage: 20 kV, injection rate: 0.2 mm min−1). In order to obtain CNF, the PAN was carbonized at 500 °C for 2 h under an inert atmosphere with a heating rate of 2 K min−1.

2.3. Fabrication of In2S3/CNF. In2S3/CNF with different In2S3 loadings was then prepared by a facile one-pot hydrothermal method. Briefly, a certain amount of InCl3·5H2O (351, 702, or 1053 mg) and thioacetamide (120 mg) was dissolved in ethyl alcohol (40 mL) under ultrasound conditions. The CNF (50 mg) was then immersed in the above solution, which was then transferred to a Teflon-lined autoclave and heated in a homogeneous reactor at 180 °C for 12 h. Following this method, samples with different weight ratios of In2S3 to CNF were synthesized and labeled In2S3/CNF-1, In2S3/CNF-2, and In2S3/CNF-4, respectively. As a control, pure In2S3 was fabricated by the same method.

2.4. Characterization. Scanning electron microscopy (SEM; Hitachi S-4800) coupled with X-ray energy dispersive spectroscopy (SEM-EDS) and transmission electron microscopy (TEM; Hitachi H600) were used to observe the morphology, structure, and size of the In2S3/CNF and its components. The effect of the In2S3 and CNF contents of In2S3/CNF on its structural properties was investigated by X-ray photoelectron spectroscopy (XPS; Axis Ultra HAS), Raman spectroscopy (Axis Ultra HAS), and X-ray diffraction (XRD; X'Pert-Pro MPD). The optical properties and the dye concentration were determined by UV-visible diffuse reflectance spectroscopy (UV-vis DRS, Shimadzu UV-3600).

2.5. Photocatalytic Activity Measurements. The photocatalytic activities of the samples were evaluated by measuring the photodegradation of Rhodamine B (RB) under visible light. In a typical measurement, 40 mg of photocatalyst was suspended in 100 mL of a 50 ppm aqueous solution of RB.
The solution was stirred in the dark for 30 min to obtain a good dispersion and to reach the adsorption-desorption equilibrium between the organic molecules and the catalyst surface [23]. Then the suspension was illuminated with a 250 W xenon lamp. The concentration change of RB was monitored by measuring the UV-vis absorption of the suspensions at regular intervals (samples taken every 10 minutes). The suspension was filtered to remove the photocatalysts before measurement. The concentrations of RB in the reacting solutions were analyzed at λ = 554 nm [24]. The photocatalytic activity was analyzed by the time profiles of C/C0, where C is the concentration of RB at the irradiation time t and C0 is the concentration at the adsorption equilibrium of the photocatalysts before irradiation, respectively. The normalized temporal concentration changes (C/C0) of RB are proportional to the normalized maximum absorbance (A/A0), which can be derived from the change in the RB absorption profile at a given time interval [25].

Results and Discussion

The X-ray diffraction (XRD) patterns of pure CNF, In2S3, and In2S3/CNF-2 are shown in Figure 1. All of the diffraction peaks can be indexed to In2S3 with a cubic phase structure (JCPDS no. 32-0456). Peaks at 2θ of 14, 27, 33, 44, 48, 56, and 60° in the XRD patterns of In2S3/CNF-2 and In2S3 correspond to the (111), (311), (400), (511), (533), and (444) planes of In2S3, respectively. The XRD patterns of the In2S3/CNF-2 heterostructures show all the diffraction peaks assigned to In2S3 except the peak at 25°, which corresponds to the (130) plane of orthorhombic CNF, indicating the coexistence of In2S3 and CNF in the In2S3/CNF-2 heterostructures. Moreover, the intensities of the corresponding In2S3 diffraction peaks strengthen gradually in the In2S3/CNF-2 composites, demonstrating the formation of the heterostructures.

The morphology of the In2S3 and In2S3/CNF-2 was analyzed by SEM and TEM. The flower-like In2S3, with an average diameter of 5 μm, possesses porous structures due to the aggregation of a certain amount of nanosheets (Figure 2(a)). The TEM image of Figure 2(b) further confirms this result. The SEM image of electrospun CNF is shown in Figure 2(c), which shows that the average diameter is about 300 nm and the surface is smooth and defect-free. As shown in the TEM images (Figure 2(d)), it is clear that the surface of In2S3/CNF-2 is uniformly covered by ultrathin In2S3 nanosheets after the hydrothermal treatment. Further, no aggregation is found on the surface of the In2S3/CNF-2 composites. The EDX spectrum shown in Figure 2(e) reveals the presence of In and S elements in a mass fraction ratio of 4.47% : 1.61%, which is close to the expected stoichiometry for In2S3 (the Au signal is from the FTO substrate).

Figure 3 shows the different concentrations of In2S3 deposited on the surface of the CNF nanofibers. At low concentration, a small amount of nanoplate-like In2S3 was found on the smooth surface of the CNF nanofibers. As the concentration increases (Figures 3(c) and 3(d)), In2S3 nanosheets with curled shapes grow vertically on the nanofiber surface with a uniform distribution. In addition, the surface of the nanofiber also turns from smooth to rough. As shown in Figures 3(e) and 3(f), serious aggregation occurred and thick-layer In2S3 nanosheets were observed after further increasing the In2S3 concentration.
This demonstrates the rapid nucleation of In2S3 at high concentration.

XPS measurements were carried out to verify the chemical composition and chemical states of the elements in the In2S3/CNF-2 heterostructure photocatalyst [26]. The full-scale XPS spectrum for the In2S3/CNF-2 sample is shown in Figure 4(a), in which the In, S, and C elements could be detected and no other impurities were observed. Figures 4(b), 4(c), and 4(d) show the high-resolution XPS spectra for the In2S3/CNF sample. The XPS results (Figure 4) confirm that the composites are composed of In2S3 and CNF.

Raman analysis was used to confirm the presence of CNF and In2S3 in the In2S3/CNF-2 sample (Figure 5). The D-peak (D band) represents the defects of the C atomic lattice, and the G-peak (G band) represents the stretching vibration of sp2-hybridized C atoms. The representative Raman spectrum of the CNF in the range of Raman shift from 100 to 2000 cm−1 shows mainly two peaks, centered around 1369 cm−1 (D band) and 1590 cm−1 (G band). Furthermore, the degree of graphitization can be measured by the intensity ratio of the G to D band (IG/ID) [28-30]. The ID/IG integral intensity ratio for CNF in the In2S3/CNF-2 sample (1.13) is slightly higher than that of pure CNF (1.12), indicating that a certain amount of In2S3 was deposited on the surface of CNF during the chemical reduction process and that the conjugated CNF network was reestablished [31]. The two peaks for the D and G bands of the composite show no shift, indicating that only a small amount of In2S3 was deposited on the surface of CNF.

The optical properties of the three samples were examined by UV-vis DRS absorption spectroscopy (Figure 6). Obviously, CNF shows the strongest absorption, extending across the visible and UV light regions. It should be noted that In2S3/CNF-2, with the addition of CNF, showed increased absorption compared to In2S3 (Figure 6(a)). The band gap energy (Eg) of the samples was calculated by Tauc's equation [32, 33] and the result is shown in Figure 6(b); the values for In2S3 and In2S3/CNF-2 in Figure 6(b) are approximately 2.70 and 3.08 eV, respectively. The band gap of In2S3/CNF-2 was higher than that of In2S3, and both are close to the values of In2S3 and In2S3/CNF reported in other literature [10, 34]. Thus, the as-prepared In2S3/CNF-2 heterojunction structures have an appropriate band gap for the photodegradation of organic pollutants under visible light irradiation.

Figure 6: UV-vis diffuse reflectance spectrum of In2S3, CNF, and In2S3/CNF-2 (a) and the direct band gap determination of In2S3 and In2S3/CNF-2 (b). The tangent at this point corresponds to the smallest absorption wavelength.

In order to assess the photodegradation ability, different photocatalysts were used to photodegrade the organic pollutant under visible light irradiation, and the product samples were then analyzed. The results are shown in Figures 7(a) and 7(b); the UV-vis absorption spectra in Figure 7 show the characteristic absorptions of RB at 570 and 580 nm. Owing to the strong absorption ability of CNF, a certain amount of RB was attached to the CNF before irradiation.
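Before turning to the degradation curves, here is a minimal sketch of how an apparent first-order rate constant can be extracted from such C/C0 profiles, assuming the Langmuir-Hinshelwood first-order form used below; the data points are made-up illustrations, not values from this study.

```python
# Fit k_app from ln(C0/C) = k_app * t, the apparent first-order form
# valid for dilute dye solutions. Times and C/C0 values are illustrative.
import numpy as np

t = np.array([0.0, 10, 20, 30, 40, 50, 60])   # irradiation time (min)
c_over_c0 = np.array([1.0, 0.79, 0.63, 0.50, 0.40, 0.31, 0.25])

# np.polyfit(deg=1) returns [slope, intercept]; the slope is k_app.
k_app, intercept = np.polyfit(t, np.log(1.0 / c_over_c0), 1)
print(f"k_app = {k_app:.4f} min^-1")
```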
Furthermore, once dissociation and adsorption reach equilibrium, the concentration of Rhodamine B is the same for the In2S3 and In2S3/CNF-2 suspensions. The concentration of RB does not change significantly after irradiation, as shown in Figure 7(a). The change in the concentration of RB in Figure 7(b) is significantly greater than that in Figure 7(a), which can be further confirmed through the change of solution color before and after degradation with the different photocatalysts. The degradation curves of RB on pure In2S3, CNF, and In2S3/CNF-2 composites are shown in Figure 8(a). Obviously, the concentration with CNF alone shows almost no change, indicating that pure CNF has no photocatalytic activity under visible light irradiation. The In2S3/CNF-2 composites achieve a better photocatalytic efficiency (78.2%) for RB after visible light irradiation for 60 min than pure In2S3. It is concluded that the In2S3/CNF-2 composites exhibited much higher photocatalytic efficiency compared with pure In2S3.

For a better comparison of the photocatalytic efficiency of In2S3/CNF-2 and pure In2S3, a kinetic analysis of the degradation of RB was performed. The degradation reactions follow a Langmuir-Hinshelwood apparent first-order kinetics model [32, 35] when the initial concentrations of the reactants are less than 100 ppm. The Langmuir-Hinshelwood apparent first-order kinetics model is described below:

ln(C0/C) = kapp t,

where kapp is the apparent first-order rate constant (min−1). The determined kapp values for the degradation of RB with the different catalysts are presented in Figure 8(b). It is clear that the as-prepared In2S3/CNF-2 composites show the higher reaction rate of the two catalysts, with kapp = 0.0232 min−1, while kapp = 0.0169 min−1 for pure In2S3. The photocatalytic reactivity order is well consistent with the activity studies above.

It is reasonable to presume that the photogenerated electrons (e−) transfer from In2S3 to CNF in the In2S3/CNF system under visible light irradiation. Therefore, the photogenerated electrons first transfer to CNF and are then trapped by O2 and H2O at the surface of the photocatalyst or in solution to form active species such as •O2−. These active species assist the degradation of the RB dye. At the same time, the photogenerated holes (h+) can react with H2O to form •OH and hydrogen ions (H+), and can also oxidize the RB dye directly [12]. The complete photodegradation process can be summarized by the following reaction steps:

In2S3/CNF + hν → e− (CNF) + h+ (In2S3)
e− + O2 → •O2−
h+ + H2O → •OH + H+
•O2− / •OH + RB → degradation products

Conclusion

In summary, an effective method of preparing In2S3/CNF photocatalysts was described in this paper. The incorporation of CNF serving as an electron collector realizes a more effective separation of photogenerated electron-hole pairs and greatly boosts the photocatalytic activity of the products compared with pure In2S3. The In2S3/CNF-2 composites show strong adsorption ability towards RB and can degrade 50 ppm of RB in 60 minutes under visible light; the excellent RB degradation activity of In2S3/CNF is mainly attributed to the large amount of effective reactive species such as h+ and •O2−. Overall, this study provides a new option for constructing semiconductor/CNF composites with high photocatalytic activity for environmental remediation and energy conversion.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Efficacy of noninvasive respiratory support modes for primary respiratory support in preterm neonates with respiratory distress syndrome: Systematic review and network meta-analysis

To compare the efficacy of different noninvasive respiratory support (NRS) modes for primary respiratory support of preterm infants with respiratory distress syndrome (RDS).

| INTRODUCTION

The introduction of surfactant had a major impact on improving the outcomes of preterm neonates with respiratory distress syndrome (RDS).1 There was a major shift in the practice of surfactant therapy in the last decade, with studies showing better outcomes with early selective rescue treatment when compared to the previously practiced prophylactic administration.2 Stabilising neonates with RDS on a noninvasive respiratory support (NRS) such as continuous positive airway pressure (CPAP) and then instituting surfactant therapy in selected neonates who have an increased oxygen requirement has become the standard practice.3 Newer modalities of NRS that have come into practice in neonatal medicine in the past two decades include heated humidified high-flow nasal cannula (HFNC), noninvasive positive pressure ventilation (NIPPV), bilevel CPAP (BiPAP), and nasal high-frequency oscillation ventilation (nHFOV).4,5 Several systematic reviews have compared different NRS strategies in pair-wise meta-analyses; however, only one network meta-analysis (NMA) has evaluated different NRS strategies in preterm neonates with RDS.6-10 The NMA by Isayama et al10 evaluated additional strategies (including mechanical ventilation [MV] following surfactant) along with CPAP and NIPPV.10 In this systematic review, we critically review the different modes of NRS and compare their effects in an NMA.

| METHODS

The efficacy and safety of four NRS modalities used as primary respiratory support in preterm neonates with RDS were compared: HFNC, CPAP, BiPAP, and NIPPV. The systematic review protocol was registered with PROSPERO (CRD42020177474).11 The reporting of this review is consistent with the PRISMA for network meta-analyses guidelines.12

| Outcomes

The primary outcomes were: (a) requirement of invasive MV within the first 7 days of randomization; (b) treatment failure, defined as requirement of an additional form of respiratory support for reasons such as respiratory acidosis, hypoxemia, or severe apnea within the first 7 days of randomization. The secondary outcomes included the incidence of mortality (neonatal and before discharge), incidence of bronchopulmonary dysplasia (BPD) defined as oxygen requirement at 36 weeks of postmenstrual age, incidence of mortality or BPD, incidence of air leak, incidence of severe intraventricular haemorrhage defined as grade more than 2,13 incidence of necrotising enterocolitis (NEC) stage ≥2,14 incidence of patent ductus arteriosus requiring medical therapy or surgical intervention, incidence of severe retinopathy of prematurity defined as that requiring laser therapy and/or intra-vitreal anti-vascular endothelial growth factor and/or stage ≥3 as per the International Committee for the Classification of Retinopathy of Prematurity,15 and incidence of nasal injury.

Potentially eligible studies were retrieved by the two review authors for further evaluation. In case of any conflicts, a third author's (KM) opinion was sought.

| Assessment of risk of bias in included studies

The Cochrane risk of bias tool version 1 was used to assess the risk of bias of the included studies by two review authors independently (VVR and KM).16
The risk of bias was evaluated based on the following domains: selection bias, performance bias, detection bias, attrition bias, reporting bias, and other bias. Any disagreement between the reviewers was resolved by discussion or consultation with the third author (PBH).

| Data synthesis

The characteristics of the included studies were tabulated and reviewed to exclude those studies that might result in intransitivity. NMA was done by the Bayesian approach using a random-effects model with Markov chain Monte Carlo simulation with vague priors (GEMTC, BUGSnet) in the R software (version 3.6.2).17,18 Generalized linear models with four chains and a burn-in of 50,000 iterations followed by 100,000 iterations with 10,000 adaptations were used.18 The geometry of the networks was assessed using network plots, with the size of the nodes proportional to the number of subjects included in the intervention and the thickness of the arms connecting the intervention nodes corresponding to the number of studies included in the comparison. Model convergence was assessed using Gelman-Rubin plots as well as by analyzing the trace and density plots.19 Inconsistency was assessed by node-splitting.20 Pair-wise meta-analysis evaluating the direct evidence for the different NIV modalities was also done, and heterogeneity was assessed using the I2 statistic and Cochran's Q test. The results of the NMA are expressed as risk ratios (RR) with 95% credible intervals (CrIs) in league matrix tables and forest plots. The league matrix tables display the RR of the outcome parameter for the intervention in the row vs that in the column in the lower triangle, and vice versa in the upper triangle. The comparisons of direct and indirect evidence using node-splitting are expressed as odds ratios with 95% CrIs. The surface under the cumulative ranking curve (SUCRA) was used to rank interventions for all the outcomes. SUCRA is an index with values from 0 (least effective intervention) to 1 (best intervention).21 SUCRA should always be interpreted together with the 95% CrIs as well as the quality of the evidence. The confidence in the final estimates for all the outcomes was assessed using the GRADE approach, as recommended by the GRADE working group.22

| Meta-regression and sensitivity analysis

The following sensitivity analyses were done:
i. excluding trials with a high risk of bias;
ii. excluding trials that had enrolled neonates who had already received surfactant prior to randomization;
iii. assessing the interventions NIPPV and BiPAP based on synchronization: synchronized NIPPV (S-NIPPV), nonsynchronized NIPPV (NS-NIPPV), synchronized BiPAP (S-BiPAP), and nonsynchronized BiPAP (NS-BiPAP).

Meta-regression was done using age as the covariate. The results of the meta-regression are illustrated with regression plots. The network estimates (expressed as RR [95% CrI]) for different gestational ages are depicted using forest plots.

| RESULTS

The electronic database search revealed a total of 9032 studies. After screening for suitability, 35 studies were included in the final synthesis. The PRISMA flow diagram is depicted in Figure 1. Thirty-three studies (3994 neonates) and thirty-two studies (3867 neonates) were analyzed for the primary outcomes of treatment failure and requirement of MV, respectively. The mean gestational age of the neonates was 31 weeks (E-Figure S1). Five studies had enrolled neonates who had already received surfactant before randomization.
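To make the SUCRA index described in the Methods concrete, the sketch below computes SUCRA values from a matrix of posterior rank probabilities; the probabilities and treatment labels are invented for illustration and do not come from this review.

```python
# SUCRA from rank probabilities p[i, r] = P(treatment i has rank r+1),
# with each row summing to 1. For a treatments,
# SUCRA_i = (sum over the first a-1 ranks of cumulative P) / (a - 1).
import numpy as np

rank_probs = np.array([
    [0.70, 0.20, 0.07, 0.03],   # e.g., NIPPV (illustrative numbers)
    [0.20, 0.50, 0.20, 0.10],   # e.g., BiPAP
    [0.07, 0.20, 0.43, 0.30],   # e.g., CPAP
    [0.03, 0.10, 0.30, 0.57],   # e.g., HFNC
])

a = rank_probs.shape[1]
cum = np.cumsum(rank_probs, axis=1)[:, : a - 1]   # cumulative P(rank <= r)
sucra = cum.sum(axis=1) / (a - 1)                 # 0 = worst, 1 = best
print(np.round(sucra, 3))                         # e.g., [0.857 0.6 0.39 0.153]
```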
The time cut-offs for treatment failure and MV were within the first 72 hours after randomization for most of the included studies. The NRS settings (initial and maximum) used in the included studies are given in E-Table S2. The characteristics of the included studies are given in Table 1. Some of the studies that were excluded, with the reasons for exclusion, are given in Table 2.58-89 Meta-regression showed a trend similar to that for the outcome of MV (E-Figure S6).

| Secondary outcomes

The geometry and other characteristics of the networks for the different secondary outcomes are displayed in Figure 2, E-Figure S7, and Table 3, respectively. The networks assessing the outcomes mortality and NEC were inconsistent as assessed by node-splitting. The SUCRA plots for the secondary outcomes are given in Figure 4 and E-Figure S8.

| Air leak

NIPPV was associated with a lower incidence of air leak when compared with CPAP and BiPAP (Figure 3 and Table 4).

| Mortality or BPD

NIPPV was associated with a decreased risk of the combined outcome of BPD or mortality when compared to CPAP (0.74 [0.52, 0.98]) (Figure 3 and Table 4).

| Excluding studies with high risk of bias

When studies with a high risk of bias were excluded, there was no difference in efficacy between any of the NRS modalities for the outcome MV. The results were unchanged for the other outcomes (E-Figures S10 and S11).

| Excluding studies which had enrolled neonates who had already received surfactant

The results were unchanged after excluding trials that had enrolled neonates who had already received surfactant before randomization (E-Figures S10 and S11).

The league tables are given in E-Table S3. NS-BiPAP, S-NIPPV, and NS-NIPPV had a decreased incidence of treatment failure when compared to CPAP as well as HFNC. There were no statistically significant differences between the synchronized and nonsynchronized modalities (league table in E-Table S3; Figure 5).

| Quality of evidence

The overall confidence in the NMA effect estimates for the primary outcomes of treatment failure and requirement of MV was moderate for the HFNC vs CPAP, CPAP vs NIPPV, and HFNC vs NIPPV comparisons, and low to very low for the other comparisons. The quality of evidence was very low to moderate for all other secondary outcomes for the different comparisons. The quality of evidence for all the comparisons across outcomes is given in Table 5.

| DISCUSSION

Our findings are consistent with the pair-wise meta-analysis by Lemyre et al,9 which found NIPPV superior to CPAP in preventing treatment failure as well as MV. The relative risk reduction for both primary outcomes was much larger than that reported by Lemyre et al,9 with narrower CrIs. Reasons for this could be that this NMA included more recently published studies and that the modalities BiPAP and NIPPV were evaluated as separate interventions. Also, this was an NMA in which, apart from the direct synthesis, the indirect evidence also contributed toward the overall effect estimate. It is evident from the included studies that the peak inspiratory pressure, and hence the mean airway pressure (MAP), delivered with NIPPV was much higher than the positive end-expiratory pressure generated with CPAP.41-54 This might be one of the reasons for NIPPV being more effective than CPAP. The fact that the incidence of air leak, as well as that of the combined outcome of BPD or mortality, was much lower with NIPPV when compared to CPAP might suggest that the use of a relatively higher MAP with NIPPV is not deleterious.
The analysis of secondary outcomes reveals that both BiPAP and CPAP were associated with an increased risk of air leak and mortality when compared to NIPPV. Also, the risk of the combined outcome of mortality or BPD was higher with CPAP compared with NIPPV. Isayama et al,10 in their NMA of different invasive and noninvasive modalities along with different methods of surfactant administration in preterm neonates with RDS, found no differences in the incidence of air leak, mortality, or BPD between CPAP and NIPPV.10 The differences in the findings between this NMA and Isayama et al's10 may be explained by differences in the inclusion criteria between the two meta-analyses.

The increased risk of air leak with BiPAP when compared to NIPPV could be explained by the different mechanisms of flow used by these two interventions.91 While NIPPV uses a fixed flow delivered by a ventilator, BiPAP is a variable-flow device. Some of the BiPAP studies used a very high upper pressure level (Phigh) of up to 15 cm H2O, which might require a very high gas flow rate.92 Also, inspiratory times are typically longer in BiPAP than in NIPPV, which might result in the alveoli being exposed to higher pressures for a longer period of time as well as increase the risk of gas trapping, especially when higher respiratory rates are used. The risk of mortality or BPD was higher with CPAP compared with NIPPV in this NMA. This was not seen in Isayama et al's10 NMA. This might be due to the differences in the inclusion criteria between the two meta-analyses, as specified above. Also, the quality of evidence for most of the secondary outcomes of this NMA was low to very low and hence should be interpreted with caution.

| STRENGTHS AND LIMITATIONS

This is one of the largest NMAs evaluating the different NRS modalities used as primary support for preterm neonates with RDS. It is compliant with the PRISMA NMA extension. The quality of evidence for all the outcomes was assessed in a robust manner as per the GRADE working group recommendations. Limitations of this NMA include that it did not cover two recently introduced NRS modalities in neonatal respiratory care, namely nasal high-frequency oscillation ventilation (nHFOV) and neurally adjusted ventilatory assist. Also, two of the secondary outcomes (NEC and mortality) had inconsistent networks. Finally, the event rates and the optimal information size for most of the secondary outcomes were low, with the quality of the evidence being downgraded to low to very low for these outcomes.

| CONCLUSIONS

The overall quality of evidence for the primary outcomes was moderate to very low for the different comparisons. NIPPV appears to be the most effective primary NRS modality in preterm neonates with RDS to prevent MV and respiratory failure in the first few days of life.
Associated factors to urinary incontinence in women undergoing urodynamic testing

Universidade Federal de Mato Grosso do Sul, Três Lagoas, Mato Grosso do Sul, Brazil.

ABSTRACT

Objective: To analyze factors associated with urinary incontinence (UI) among women submitted to urodynamic testing. Method: A cross-sectional study of 150 women attended at a urological center. Data were analyzed using univariate and multivariate statistics. Results: Participants were mostly white women (79.3%), overweight (45.3%), menopausal (53.3%), coffee drinkers (82.7%), sedentary (65.3%), who had had vaginal birth (51.4%), with episiotomy (80%), and who had undergone the Kristeller maneuver (69%); 60.7% had Urethral Hypermobility (UH). A statistical association was found between: weight change and UH (p = 0.024); menopause, Intrinsic Sphincter Deficiency (ISD) and Detrusor Instability (DI) (p = 0.001); gynecological surgery, ISD and DI (p = 0.014); hysterectomy and all types of UI (p = 0.040); and physical activity and mixed UI (p = 0.014). Conclusion: Interventions and guidance on preventing UI and strengthening pelvic muscles should be directed at women who present weight changes, sedentary menopausal women, and those who have undergone hysterectomy or other gynecological surgery. Studies on pelvic strengthening methods are needed in order to take into account the profile of needs presented by women.

INTRODUCTION

Urinary incontinence (UI) is defined by the International Continence Society (ICS) as any complaint of urine loss, regardless of the degree of social or hygienic discomfort it causes, and affects 14% to 57% of women aged between 20 and 89 years (1-4). Its higher prevalence in women stems from the shorter length of the urethra, the anatomy of the pelvic floor, pregnancy and delivery, as well as hormonal changes throughout their life cycles following ovarian follicle depletion and progressive hypoestrogenism (5,6). In general, the main risk factors for UI are related to sociodemographic aspects, clinical history of certain diseases, gynecological and obstetric factors, and life habits, especially smoking, caffeine consumption, and a sedentary lifestyle or intense physical activity (5,6).

The negative impacts of UI on women stand out in their reports: discomfort and embarrassment at losing urine with minimal effort, frequent trips to the bathroom, being wet and ashamed of a urine odor for stretches of time, losing urine on the way to the toilet, restricted time away from home, having to control fluid intake, as well as family and social relationship problems. By affecting all aspects of their quality of life, such problems generate fear, shame, embarrassment and humiliation, along with physical, emotional, psychological and social consequences (1,3,4).

UI is classified into: stress urinary incontinence (SUI); urethral hypermobility (UH); intrinsic sphincter deficiency (ISD); and detrusor overactivity or detrusor muscle instability (DI). For the assessment of UI, urodynamic testing (UDT) is a widely used diagnostic technique in Brazil, performed in association with surveying patients' data on the circumstances, frequency and severity of urine loss (7). However, this technique is questioned due to its cost, which impedes it being carried out on a larger scale, the discomfort and embarrassment of those who are submitted to it, and the fact that it often does not reproduce the reported symptoms, such as in cases of overactive bladder (8).
Several studies have been carried out in Brazil regarding UI in women; however, studies that address the association between types of UI and their subclassifications are still scarce. Such studies would provide support for designing diagnostic and treatment measures that can minimize or prevent the presented symptoms, resulting in a better quality of life for women.

Thus, the objective of this study is to analyze the sociodemographic, health, life-habit, gynecological and obstetric factors associated with urinary incontinence among women undergoing urodynamic testing.

METHOD

This is a cross-sectional quantitative study approved by the Research Ethics Committee of the São José do Rio Preto School of Medicine (FAMERP) under number 303.015, conducted with women with UI treated at a Urological Diagnosis and Treatment Center in the city of São José do Rio Preto. This center attends private patients, those with supplementary health insurance, and users of the Brazilian Unified Health System (Sistema Único de Saúde - SUS), with an average of 100 urodynamic tests conducted on women per month.

The sample consisted of 150 women with UI undergoing urodynamic testing, selected by non-probabilistic convenience sampling, including the first 30 women who underwent UDT each month between May and September 2013. The inclusion criteria adopted were: being a woman over the age of 18, not being in the gestational or puerperal period, not having a cognitive deficit, and accepting to participate in the study after orientation about the study and signing of the Informed Consent Form. Those under 18 years of age or unable to read and respond to the questionnaire were excluded.

Data were collected through primary and secondary data sources. Participants responded to an instrument adapted from Higa (5), which included variables related to gynecological history, as well as sociodemographic, obstetric, health data and lifestyle habits. UI classification was performed based on the findings of the urodynamic testing medical report, collected through a review of the participants' medical charts.

Table 2 shows a statistically significant association between the participants' type of UI and weight change, menopause, hysterectomy and physical activity. An association with ISD, DI and mixed UI was found in postmenopausal women; those who underwent gynecological surgery had ISD and DI types of UI; while those who presented weight change and those who underwent hysterectomy showed an association with all evaluated types of UI. Sedentary women had more ISD, UH and DI types of UI, and those who performed physical activity presented more mixed UI. Women with DI type UI were significantly older than those with UH type UI (Table 3).
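The associations in Table 2 and the age comparison in Table 3 are the kind of results produced by chi-square tests of independence and one-way ANOVA; a minimal sketch, assuming a data frame with hypothetical column names ui_type, menopause, and age (the names and file are illustrative, not from the original dataset), is given below.

```python
# Sketch of the two analyses behind Tables 2 and 3: a chi-square test of
# independence for UI type vs a categorical factor, and a one-way ANOVA
# for age across UI types. Column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("ui_study.csv")  # hypothetical file

# UI type vs menopause (Table 2 style)
table = pd.crosstab(df["ui_type"], df["menopause"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# Age across UI types (Table 3 style, ANOVA)
groups = [g["age"].values for _, g in df.groupby("ui_type")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_anova:.3f}")
```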
Figure 1 shows that women with UH type UI were up to 49 years of age, had gained weight in the last 10 years, consumed coffee and had a smoking habit. On the other hand, mixed type UI was associated with one or more of the following factors: being over 50 years old, overweight, hypertensive, using diuretics, diabetic, menopausal, and performing some physical activity. Among women with DI or ISD type urinary incontinence, disorders such as neurological disease, cough and constipation, and previous gynecological surgery such as hysterectomy, perineoplasty or a sling procedure were reported.

Table 4 presents the obstetric variables by UI type among the 140 women in the study with obstetric history, showing: a mean of 2.8 gestations (SD: 1.5, minimum 1); the newborns weighed an average of 3,480 g (SD: 535 g; minimum 1,980 and maximum 5,300 g); 48.6% had had an average of 2 cesarean deliveries (SD: 0.7, minimum 1 and maximum 3 deliveries); 40 (28.6%) had had an average of 2.5 vaginal deliveries (SD: 1.5, minimum 1 and maximum 8 deliveries); and 32 (22.9%) had experienced both types of delivery; 50 (69.4%) reported that they had undergone the Kristeller maneuver; 58 (80.6%) had an episiotomy; and 57 (79.2%) did not use oxytocin. Regarding the type of delivery and UI, we found: ISD (15 - 10.7%), with 53.3% cesarean delivery; UH (88 - 62.9%); DI type (29 - 20.7%), with 62.1% vaginal delivery; and mixed UI (8 - 8.7%), with 75% vaginal delivery.

*Only the 72 women who had vaginal delivery were considered for these variables.

DISCUSSION

Studying the loss of urinary continence and associated factors is important not only because it represents a serious public health problem, but also because of the magnitude of the suffering it causes to affected people in the physical, psychological and social spheres (1,9).

The age of women affected by UI is revealed in several studies, showing that UI affects women across a wide age range, especially after the age of 40, with increasing incidence over the course of aging, as also evidenced in this study. In older adults, the most common type of UI is SUI, with loss of urine when coughing and sneezing (9), followed by DI, characterized by loss of urine before reaching the bathroom (6-11). SUI affects 50% of North American women, mainly women from the younger age group (11,12).

With regard to ethnicity, there is evidence of higher UI prevalence in white women over the different age groups (13). Although a higher proportion of UI in white women was also identified in the present study, there is insufficient statistical evidence to support this finding, suggesting that such an outcome may have been influenced by the predominance of the white race in the region studied.
Weight gain is associated with UI, which was also verified in this study; body mass index (BMI) was a determinant of the incidence and persistence of UI (5,14,15). Being overweight among older women contributes to the increase in UI symptoms, and regular physical exercise is a protective factor against UI, as it prevents obesity (12). In this study, we observed that the majority of participants did not practice physical activity, which may explain the association between all types of UI studied and women who reported gaining weight. Studies highlight the importance of physical activity for health, as well as for UI improvement, showing a higher frequency of urinary losses among less active older women (7). On the other hand, practicing rigorous high-impact physical exercise may be a predisposing factor for the development of UI in young and nulliparous women due to the increase in intra-abdominal pressure, as evidenced in women practicing jump classes (16). Although the present study did not measure the intensity and impact of the physical exercise performed by the analyzed women, they showed a higher frequency of mixed UI. Given the importance of physical exercise and its effectiveness in treating obesity, and given that it appeared here as an aspect associated with UI, there is a clear need for adequately designed studies to evaluate the appearance and severity of UI according to the physical exercise profile, in order to support the implementation of protective measures specific to the exercises practiced.

Daily intake of coffee is cited as a factor for UI because caffeine can generate detrusor instability, which leads to loss of urine and a sense of voiding urgency (6). In the present study, 82.7% of the women reported drinking coffee daily, with an average of 2.5 cups per day (SD: 2.3 cups/day; median of 2 cups), ranging between 1.0 and 15.0 cups per day.

Smoking is also associated with loss of urine, as tobacco leads to estrogen deficiency and smoking causes frequent coughing, facts that can lead to UI (6). Although this study found no association between UI and intestinal constipation, some studies show its association with SUI due to the injuries it can cause to the pelvic muscles from the force/pressure exerted during evacuation (6,17).

The association between UI and health problems is reported in several studies. The proportion of women with UI and hypertension found in this study corresponded to another study that showed the prevalence of UI in women who used diuretics (21.4%) (15). A 2.5-fold higher association of UI among diabetics is also described, since hyperglycemia causes changes in the muscle and urethral extracellular matrix (18,19).

Gynecological aspects associated with UI were: menopause, explained by the hormonal changes that affect the pelvic muscles (15,20); perineoplasty and sling procedures, which, although intended as treatments protecting against the appearance of UI, were ineffective among the women in the study; and hysterectomy, referred to as a UI risk, as it may cause damage to the structures supporting the bladder and urethra (5,11,21).
UI can be frequent in the gestational period due to the action of increasing uterine pressure and fetal weight on the pelvic floor muscles, and to hormonal changes leading to a reduction in the strength of the urethral sphincter support function (20,22). In our study, 21% of women reported UI during pregnancy. This is a lower proportion than that found in the literature, which suggests that lifestyle may have had an influence and that protective measures against the appearance of UI may have been employed even before pregnancy (23,24). A relationship between UI in pregnancy and the weight gain of pregnant women has been reported as a risk factor for pelvic floor muscle dysfunction (25). A postpartum strategy for UI prevention is pelvic floor muscle training and postpartum weight loss (26). It is also known that certain symptoms such as frequency, nocturia and urge incontinence (common in pregnancy) decrease significantly and tend to disappear in the postpartum period (25). Parity has been reported as a risk factor for UI (25-27), which was also demonstrated in this study.

Regarding the history of abortion/miscarriage in women affected by UI, there are few studies that associate this occurrence with the appearance of UI. In the present study, we found that the majority of women with UI did not have a history of abortion/miscarriage (22).

Regarding the type of delivery, it is common to assert that vaginal delivery confers a higher risk for the development of SUI in comparison to cesarean childbirth, suggesting that trauma to the pelvic floor from vaginal delivery would represent a risk for the development of UI. However, there is scientific evidence indicating that well-conducted vaginal delivery is more beneficial to both mother and baby. The literature indicates equality in the prevalence of UI among women who had vaginal delivery and those with cesarean section, reporting an even higher proportion of UI among women who only underwent cesarean section (24-27). The frequency of UI in women who had vaginal delivery or cesarean section in the present study was similar.

The use of forceps was not found to be determinant for UI, although it is mentioned that the use of forceps during vaginal delivery causes more vulvoperineal laceration associated with the appearance of UI, especially SUI. We emphasize that misuse of forceps is associated with pelvic floor dysfunctions, leading to the appearance of UI (17,28-30).

Epidural analgesia is cited as a risk factor for UI as it causes prolongation of the expulsive period, increasing the risk of pelvic floor injury. However, some authors consider that such anesthesia protects against UI by inducing relaxation of the pelvic floor musculature, preventing trauma during the expulsive period of vaginal delivery (5). In this study, epidural analgesia was used in the deliveries of the majority of women.

It is recognized that heavier infants cause damage to the pelvic floor muscles, reducing the strength of the sphincter support function, which may cause mobility of the urethra and lead to incompetence of the urethral sphincter and UI. In the present study, the majority of the women had babies weighing more than 3,000 g (1,25,30).
Many complications can be reduced or prevented with appropriate obstetric care, but in Brazil, and especially in the region of São José do Rio Preto, obstetric care is exceedingly interventionist, technocratic, medicalized and hospital-centric, resulting in the dishonorable title of world champion of caesarean sections. Common practices that undermine obstetric care include inadequate use of oxytocin, the lithotomy position at delivery, the Kristeller maneuver, and inadequate labor guidance procedures (30).

The cooperation of the women, who provided detailed information on the variables important for this study, and the access to participants' medical records can be mentioned as factors facilitating this research. As a difficulty, we can point out the longer time spent in data collection. Even so, we consider that this research can contribute to deepening knowledge about UI and its associated factors.

CONCLUSION

The profile of the women participating in this study suggests that prevention and control of UI should be implemented through guidelines on the impact of lifestyle modifications, better control of health problems, and pelvic floor muscle strengthening practices.

Health education practices to prevent the onset of UI should be targeted at all women and can be carried out not only in primary care services, but also in private and specialized institutions/clinics.

Several studies have been carried out in Brazil regarding UI in women; however, studies that address the association between types of UI and their subclassifications are still scarce. These would provide support for designing diagnostic and treatment measures that can minimize or prevent the appearance of the presented symptoms.

Data obtained in this study can support better professional performance, especially for nurses, in establishing care protocols that facilitate diagnosis and intervention measures for prevention, treatment and control. The field of investigation of UI in women is wide, and further work is needed to lead to proposals and interventions which prevent and control UI disorders and improve the quality of life of affected women.

Figure 1 - Factorial analysis of the urinary incontinence types and sociodemographic, obstetric, health and lifestyle variables - São José do Rio Preto, SP, Brazil, May to Sept. 2013.

Table 1 - Variables of sociodemographic characterization, health and life habits of women submitted to urodynamic testing - São José do Rio Preto, SP, Brazil, May to Sept. 2013.

Table 2 - Characterization of women submitted to urodynamic testing according to the type of urinary incontinence presented - São José do Rio Preto, SP, Brazil, May to Sept. 2013.

Table 3 - Age description of the women submitted to urodynamic testing according to the type of urinary incontinence presented - São José do Rio Preto, SP, Brazil, May to Sept. 2013. 1 P value for the Analysis of Variance test (ANOVA).

Table 4 - Obstetric variables of women submitted to urodynamic testing according to the type of urinary incontinence presented - São José do Rio Preto, SP, Brazil, May to September, 2013.
TAO Conceptual Design Report: A Precision Measurement of the Reactor Antineutrino Spectrum with Sub-percent Energy Resolution

The Taishan Antineutrino Observatory (TAO, also known as JUNO-TAO) is a satellite experiment of the Jiangmen Underground Neutrino Observatory (JUNO). A ton-level liquid scintillator detector will be placed at about 30 m from a core of the Taishan Nuclear Power Plant. The reactor antineutrino spectrum will be measured with sub-percent energy resolution, to provide a reference spectrum for future reactor neutrino experiments, and to provide a benchmark measurement to test nuclear databases. A spherical acrylic vessel containing 2.8 ton of gadolinium-doped liquid scintillator will be viewed by 10 m² of Silicon Photomultipliers (SiPMs) of >50% photon detection efficiency with almost full coverage. The photoelectron yield is about 4500 per MeV, an order of magnitude higher than in any existing large-scale liquid scintillator detector. The detector operates at -50 °C to lower the dark noise of the SiPMs to an acceptable level. The detector will measure about 2000 reactor antineutrinos per day, and is designed to be well shielded from cosmogenic backgrounds and ambient radioactivity to reach a background-to-signal ratio of about 10%. The experiment is expected to start operation in 2022.

Executive Summary

The Taishan Antineutrino Observatory (TAO, also known as JUNO-TAO) is a satellite experiment of the Jiangmen Underground Neutrino Observatory (JUNO) [1]. TAO consists of a ton-level liquid scintillator (LS) detector at ∼30 meters from a reactor core of the Taishan Nuclear Power Plant in Guangdong, China. About 4500 photoelectrons per MeV could be observed by instrumenting almost full coverage (∼10 m²) of Silicon Photomultipliers (SiPMs) of >50% photon detection efficiency, resulting in an unprecedented energy resolution approaching the limit of LS detectors. The detector operates at -50 °C to lower the dark noise of the SiPMs to an acceptable level. The TAO experiment is expected to start operation in 2022.

The main purposes of the TAO experiment are: 1) to provide a reference spectrum for JUNO, eliminating the possible model dependence due to fine structure in the reactor antineutrino spectrum in determining the neutrino mass ordering [2]; 2) to provide a benchmark measurement to test nuclear databases, by comparing the measurement with the predictions of the summation method; 3) to provide increased reliability in measured isotopic antineutrino yields due to a larger sampled range of fission fractions; 4) to provide an opportunity to improve nuclear physics knowledge of neutron-rich isotopes [3]; 5) to search for light sterile neutrinos with a mass scale around 1 eV; 6) to provide increased reliability and verification of the technology for reactor monitoring and safeguards.

Figure 1: Schematic view of the TAO detector, which consists of a Central Detector (CD) and an outer shielding and veto system. The CD consists of 2.8 ton of gadolinium-doped LS filled in a spherical acrylic vessel and viewed by 10 m² of SiPMs, a spherical copper shell that supports the SiPMs, 3.45 ton of buffer liquid, and a cylindrical stainless steel tank insulated with 20 cm thick Polyurethane (PU). The outer shielding includes 1.2 m thick water in the surrounding tanks, 1 m of High Density Polyethylene (HDPE) on the top, and 10 cm of lead at the bottom. The water tanks, instrumented with Photomultipliers (shown by red circles), and the Plastic Scintillator (PS) on the top comprise the active muon veto system.
The dimensions are displayed in mm.

The schematic drawing of the TAO detector is shown in Figure 1. The Central Detector (CD) detects reactor antineutrinos with 2.8 ton of Gadolinium-doped LS (GdLS) contained in a spherical acrylic vessel of 1.8 m inner diameter. To fully contain the energy deposition of gammas from the Inverse Beta Decay (IBD) positron annihilation, a 25-cm cut from the acrylic vessel will be applied to the positron vertex, resulting in 1 ton of fiducial mass. The IBD event rate in the fiducial volume will be about 2000 (4000) events per day with (without) the detection efficiency taken into account. SiPM tiles are installed on the inner surface of a spherical copper shell of 1.882 m inner diameter. The gap between the SiPM surface and the acrylic vessel is about 2 cm. The copper shell is installed in a cylindrical stainless steel tank with an outer diameter of 2.1 m and a height of 2.2 m. The stainless steel tank is filled with Linear Alkylbenzene (LAB), also the solvent of the GdLS, which serves as the buffer liquid to shield the radioactivity of the outer tank, to stabilize the temperature, and to optically couple the acrylic and the SiPM surfaces. The stainless steel tank is insulated with 20 cm thick Polyurethane (PU) to operate at -50 °C, reducing the dark noise of the SiPMs to ∼100 Hz/mm². The central detector is surrounded by 1.2 m thick water tanks on the sides and 1 m of High Density Polyethylene (HDPE) on the top to shield against ambient radioactivity and cosmogenic neutrons. Cosmic muons will be detected by the water tanks instrumented with PMTs and by Plastic Scintillator (PS) on the top.

Although a 3%/√E[MeV] energy resolution (E is the visible energy) would be enough for TAO to serve as a reference detector of JUNO, the energy resolution should be as high as possible to study the fine structure of the reactor antineutrino spectrum and to create a highly resolved benchmark to test nuclear databases. New findings in the measurement of the reactor antineutrino spectrum might be achieved with a state-of-the-art detector. A photoelectron yield of about 4500 photoelectrons per MeV is expected for TAO from simulations, corresponding to an energy resolution of 1.5%/√E[MeV] from photoelectron statistics alone. However, when approaching the limit of the energy resolution of LS detectors, non-stochastic effects become prominent. At low energies, the contribution from the LS quenching effect might be quite large, although it is not very well understood and thus model dependent. At high energies, the smearing from the neutron recoil of IBD becomes dominant. Taking into account the projected dark noise, cross talk, and charge resolution of the SiPMs, the expected energy resolution of TAO is shown in Figure 2. The usual 1/√E behavior is not valid here. In most of the energy region of interest, the energy resolution of TAO will be sub-percent.

The Taishan Nuclear Power Plant is located in Chixi town of Taishan city in Guangdong province, 53 km from the JUNO experiment. It has two cores currently in operation. Another two cores might be built later. All reactors are European Pressurised Reactors (EPR) with 4.6 GW thermal power. At a ∼30 m baseline, the far core contributes about 1.5% of the total reactor antineutrino rate in the TAO detector. The Taishan Neutrino Laboratory for the TAO detector is in a basement 9.6 m underground, outside the concrete containment shell of the reactor core. The muon rate and cosmogenic neutron rate are measured to be 1/3 of those on the ground.
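As a quick cross-check of the photostatistics figure quoted above, the fractional resolution at visible energy E is 1/√(Y·E) for a yield Y in p.e./MeV. A minimal sketch (Python; the yield value is the one given in the text):

```python
import math

Y = 4500  # photoelectron yield in p.e./MeV (from the text)

# Fractional energy resolution from photon counting statistics alone:
# sigma_E / E = 1 / sqrt(Y * E), i.e. 1.5%/sqrt(E[MeV]) for Y = 4500.
for E in (1.0, 4.0, 8.0):  # visible energy in MeV
    res = 1.0 / math.sqrt(Y * E)
    print(f"E = {E:.0f} MeV -> sigma/E = {100 * res:.2f}%")
# At 1 MeV this gives ~1.49%, matching the quoted 1.5%/sqrt(E[MeV]).
```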
Simulations show that the cosmogenic fast neutron background, the accidental background, and the cosmogenic 8He/9Li background can be controlled to <10% of the signal with proper shielding and muon veto. The expected rates of the IBD signal and of the residual backgrounds passing the IBD selection cuts (see details in Section 2) are summarized in Table 1.

The detector R&D started in 2018. A GdLS recipe has been developed and shows good transparency and light yield at -50 °C. The SiPMs and the readout electronics have been preliminarily tested at the same temperature. A prototype detector is currently being tested at -50 °C.

Physics Goals

The three-neutrino oscillation framework has been well supported by the observations of solar neutrinos, atmospheric neutrinos, accelerator neutrinos and reactor antineutrinos. The neutrino mixing matrix, relating the mass eigenstates (ν₁, ν₂, ν₃) and the flavor eigenstates (νe, νµ, ντ), is commonly expressed as the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [4-6]. The neutrino oscillations can be described by six parameters: three mixing angles, θ₁₂, θ₁₃ and θ₂₃; two independent mass splittings, ∆m²₂₁ ≡ m₂² − m₁² and ∆m²₃₂ ≡ m₃² − m₂² (or ∆m²₃₁), where m₁, m₂ and m₃ are the masses of the mass eigenstates; and one CP-violation phase δ_CP. Today a very large set of oscillation results, obtained with an amazing variety of experimental configurations and techniques, can be interpreted in the three-neutrino framework. The mixing angles and the mass splittings have been measured with precision below 10% [7]. The unknown CP-violation phase and the neutrino mass ordering (i.e. the sign of ∆m²₃₂) are the major goals of the next-generation neutrino experiments. The neutrino mass ordering (NMO) has two possibilities: normal ordering (m₁ < m₂ < m₃) (NO) and inverted ordering (m₃ < m₁ < m₂) (IO). The mass ordering can be determined by reactor antineutrino experiments at medium baselines (a few tens of kilometers) via the interplay between the short- and long-wavelength oscillations [8]. The JUNO experiment aims to determine the neutrino mass ordering and to improve the uncertainties of three oscillation parameters to below 1% by a precise measurement of the reactor neutrino energy spectrum with an energy resolution of 3%/√E[MeV] [2]. In Figure 1-1 the two energy spectra corresponding to NO and IO are reported.
In commercial reactors like the Taishan reactors, electron antineutrinos (ν̄e) are generated from thousands of beta-decay branches of the fission products of four major isotopes: 235U, 238U, 239Pu and 241Pu. When detecting reactor antineutrinos via the IBD reaction, the expected antineutrino energy spectrum in a detector at a given time t is calculated as

\[ S_d(E_\nu, t) = N_d\,\epsilon_d \sum_r \frac{1}{4\pi L_{rd}^2}\, P_{ee}(E_\nu, L_{rd})\, \sigma(E_\nu)\, \phi_r(E_\nu, t), \quad (1.1) \]

where E_ν is the ν̄e energy, d is the detector index, r is the reactor index, N_d is the number of free protons in the detector target, ε_d is the detection efficiency, L_rd is the distance from detector d to reactor r, P_ee(E_ν, L_rd) is the ν̄e survival probability, σ(E_ν) is the IBD cross section, and φ_r(E_ν, t) is the energy spectrum of antineutrinos from reactor r, which can be calculated as

\[ \phi_r(E_\nu, t) = \frac{W_r(t)}{\sum_i f_{ir}(t)\, e_i} \sum_i f_{ir}(t)\, s_i(E_\nu), \quad (1.2) \]

where W_r(t) is the thermal power of reactor r, e_i is the mean energy released per fission for isotope i, f_ir(t) is the fission fraction, and s_i(E_ν) is the ν̄e energy spectrum per fission for each isotope. The impacts of the uncertainties of the thermal power and fission fractions on the antineutrino flux are expected to be at the sub-percent level, according to the uncertainty evaluation of the Daya Bay experiment [9].

The energy spectrum per fission for each isotope has been estimated in the literature [10-18] by two main approaches. One is the summation method [16-18], which sums the antineutrino energy spectra corresponding to thousands of beta-decay branches of about 1000 isotopes in the fission products, using information from nuclear databases. This method results in an overall 10%-20% energy-dependent uncertainty in the energy spectrum due to inadequate decay information and the lack of relevant uncertainties on nuclear structures and fission yields. The other is the beta conversion method [10-15], which converts the measured β energy spectra of the individual fission isotopes 235U, 239Pu and 241Pu to the corresponding antineutrino energy spectra using a set of virtual beta spectra.

The observed antineutrino yield per fission shows a deficit compared with the model predictions, namely the reactor antineutrino anomaly [19]. The recent reactor antineutrino experiments Daya Bay [20], RENO [21], Double Chooz [22], NEOS [23], and others confirmed the reactor antineutrino anomaly and observed a new discrepancy with the Huber-Mueller [11,12] model predictions: the observed antineutrino energy spectrum shows an excess around 5 MeV compared with the model predictions. Figure 1-2 shows the prompt energy spectrum compared with the model predictions at the Daya Bay experiment. The variation of the energy spectrum versus the fission fractions has also been studied, and the two major components, 235U and 239Pu, have been extracted and compared with model predictions. These observations of the total energy spectrum and of the extracted isotopic energy spectra at Daya Bay disagree with the model predictions. Although the Huber-Mueller model was used in these comparisons, summation models (e.g. Ref. [24]) show a similar deficit and bump. To reduce the impact of the uncertainties of reactor flux models, reactor antineutrino experiments often deploy near detectors to provide the reference spectrum. The Daya Bay, RENO and Double Chooz experiments have shown the success of such relative measurements.
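The structure of Eq. (1.2), a power-normalized, fission-fraction-weighted sum of isotopic spectra, can be illustrated with a short numerical sketch. The fission fractions are the nominal Taishan values quoted later in this report, the energies per fission are approximate standard values, and the s_i(E) shapes are toy exponentials rather than evaluated spectra:

```python
import math

# Sketch of Eq. (1.2): the reactor spectrum as a fission-fraction-weighted
# sum of per-fission isotope spectra, normalized by thermal power.
MEV_PER_JOULE = 6.241509e12
W = 4.6e9 * MEV_PER_JOULE  # 4.6 GW thermal power, converted to MeV/s

f = {"U235": 0.561, "U238": 0.076, "Pu239": 0.307, "Pu241": 0.056}  # fission fractions
e = {"U235": 202.4, "U238": 206.0, "Pu239": 211.1, "Pu241": 214.3}  # ~MeV per fission
shape = {"U235": (0.9, 0.60), "U238": (1.1, 0.55),
         "Pu239": (0.6, 0.65), "Pu241": (0.8, 0.62)}  # toy (a, b) parameters

def s_toy(E, a, b):
    """Toy per-fission spectrum, antineutrinos / fission / MeV (illustrative)."""
    return math.exp(a - b * E)

def phi(E):
    """phi(E) = W / sum_i(f_i e_i) * sum_i(f_i s_i(E)), cf. Eq. (1.2)."""
    fission_rate = W / sum(f[i] * e[i] for i in f)  # total fissions per second
    return fission_rate * sum(f[i] * s_toy(E, *shape[i]) for i in f)

print(f"total fission rate ~ {W / sum(f[i] * e[i] for i in f):.2e} /s")
print(f"phi(4 MeV) ~ {phi(4.0):.2e} antineutrinos / MeV / s")
```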
A precise measurement of the reactor antineutrino spectrum with an energy resolution of 3%/√E[MeV] at JUNO provides sensitivity to the neutrino mass ordering when the measured spectrum is compared with the spectra predicted under the hypotheses of normal and inverted mass ordering. Current reactor antineutrino experiments, such as Daya Bay, can provide a reference spectrum for JUNO to correct for the reactor flux anomaly and the 5-MeV bump. However, their energy resolution is not sufficient to constrain the fine structure of the spectrum (see later sections for details). The TAO experiment will deliver a precise antineutrino energy spectrum measurement with sub-percent energy resolution in most of the energy region of interest, providing new and important data in addition to the current reactor neutrino experiments. With these new data, TAO can achieve several physics goals:

• Measurement of a high-resolution antineutrino energy spectrum, which serves as a benchmark to test nuclear databases, provides increased reliability in measured isotopic antineutrino yields, and gives an opportunity to improve nuclear physics knowledge of neutron-rich isotopes;

• Providing the reference spectrum for JUNO to reduce the model dependence on the reactor antineutrino spectrum;

• Searching for light sterile neutrinos with a mass scale around 1 eV;

• Verification of the detector technology for reactor monitoring and safeguard applications.

Details of the above physics goals are described in the following sections.

Fine structure measurement

In a reactor, the antineutrino energy spectrum is composed of spectra from thousands of beta-decay branches. For each individual decay branch, the Coulomb correction produces a sharp edge at the end point of the individual antineutrino spectrum. As a result, the antineutrino energy spectrum has fine structure due to the discontinuities at the edge of each decay branch. A demonstration of percent-level fine structure in a spectrum calculated with the summation method is given e.g. in Ref. [26]. The popular Huber-Mueller model does not show fine structure because it uses about 30 virtual beta spectra without detailed structure to convert the β spectra to antineutrino spectra. Figure 1-3 shows an example of the summation calculation of antineutrino spectra from many fission products in Ref. [27]. The cutoff at the edge of each decay branch is clearly visible. However, the exact shape, amplitude and uncertainty of the fine structure is determined by thousands of beta-decay branches, and is thus hard to quantify due to lack of information. The measurement of the fine structure by the TAO experiment will provide a benchmark to test nuclear databases, by comparing the measurement with the predictions of the summation method. The measurement will also provide an opportunity to improve nuclear physics knowledge of neutron-rich isotopes [3] in reactors.

Figure 1-3: Calculated antineutrino energy spectra from many fission products in a commercial reactor. Figure is taken from Ref. [27].

The Daya Bay neutrino experiment will eventually collect more than 5×10⁶ reactor antineutrino signals, the largest-ever sample. This enables a precise reactor antineutrino spectral shape measurement with sub-percent uncertainties around 3 MeV [25]. However, the energy resolution of 8%/√E[MeV] at Daya Bay [9] is not sufficient to measure the fine structure of the energy spectrum.
Other reactor antineutrino experiments, such as RENO, Double Chooz, and NEOS, are also limited by energy resolution ((5-7)%/√E[MeV]). TAO is designed to have sub-percent energy resolution, better than the JUNO experiment (3%/√E[MeV]). The impact of the energy resolution on the measurement of the fine structure is demonstrated by toy MC calculations. The authors of Ref. [24] provide a summation spectrum with sufficient energy bins to include the fine structure from the end points of the individual decay branches. The summation spectrum is convoluted with different energy resolutions, corresponding to the design values of TAO and JUNO and the actual value of Daya Bay. Figure 1-4 shows the comparison of the summation spectrum and the three convoluted energy spectra. As shown in the figure, TAO and JUNO can reproduce the summation spectrum, including the fine structure, to better than 1%, while Daya Bay differs at the 2% level. Currently, the summation calculations have an uncertainty of about 10%-20% due to insufficient nuclear data, and no reliable calculations of the fine structure with well-quantified uncertainties are available. A precise measurement of the reactor antineutrino spectrum by the TAO experiment will provide a benchmark to validate the summation spectrum calculation. With three years of data taking, TAO will collect about two million antineutrino events. A statistical uncertainty below 1% in the energy range of 2.5-6 MeV can constrain the fine structure to better than 1%, providing a reference spectrum for JUNO with a bin width of about 30 keV [2].

Figure 1-4: The summation spectrum of Ref. [24] and three convoluted energy spectra with the respective energy resolutions of TAO, JUNO and Daya Bay. The ratio of the other three spectra to the Daya Bay convoluted spectrum shows a difference of about 2%. The TAO and JUNO spectra reproduce the structures of the summation spectrum to better than 1%. The bin width is set to 50 keV.
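The washout of fine structure by resolution can be reproduced with a few lines of code. This is a minimal sketch: the spectrum shape and the 2% step placed at 4.37 MeV are artificial stand-ins for a real summation spectrum, and the TAO curve is approximated here by its 1.5%/√E statistics term.

```python
import numpy as np

E = np.linspace(2.0, 8.0, 1201)                           # MeV
spectrum = np.exp(-0.8 * E) * (1.0 + 0.02 * (E < 4.37))   # toy spectrum, one edge
smooth = np.exp(-0.8 * E) * 1.01                          # edge-free baseline

def smear(spec, a):
    """Gaussian smearing with sigma(E) = a * sqrt(E[MeV])."""
    out = np.zeros_like(spec)
    for i, e0 in enumerate(E):
        kernel = np.exp(-0.5 * ((E - e0) / (a * np.sqrt(e0))) ** 2)
        out += spec[i] * kernel / kernel.sum()
    return out

for name, a in [("TAO-like 1.5%", 0.015), ("JUNO-like 3%", 0.03),
                ("Daya Bay-like 8%", 0.08)]:
    ratio = smear(spectrum, a) / smear(smooth, a)
    print(f"{name}: residual step amplitude ~ {100 * (ratio.max() - ratio.min()):.2f}%")
# Coarser resolution progressively erases the 2% step, as in Figure 1-4.
```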
Reference spectrum for JUNO

The energy resolution is essential for JUNO to distinguish the multiple oscillation patterns driven by ∆m²₃₁ and ∆m²₃₂ under the hypotheses of normal or inverted mass ordering. The uncertainty of the fine structure in the antineutrino energy spectrum has an impact on the mass ordering sensitivity. Due to insufficient decay information and the lack of uncertainties on nuclear structures and fission yields in nuclear databases, the summation method has an uncertainty at the 10% level. The antineutrino spectra currently predicted by reactor flux models, with either the summation method or the conversion method, disagree with the spectra measured at Daya Bay and other reactor antineutrino experiments. Thus, the current reactor flux models cannot provide a reliable reference spectrum including fine structure for JUNO as an input to the neutrino mass ordering identification. TAO will provide a precise reference spectrum for JUNO with sub-percent energy resolution, and its event rate will be 33 times higher than that of JUNO. With the input spectrum from TAO, the predicted antineutrino energy spectrum for JUNO without oscillations can be expressed as

\[ S_{\mathrm{JUNO}}(E_\nu) = S_{\mathrm{TAO}}(E_\nu) + \sum_i \Delta f_i\, S_i(E_\nu), \quad (1.3) \]

where S_TAO(E_ν) is the reference antineutrino energy spectrum from TAO, ∆f_i is the possible difference of the fission fractions for the four major isotopes, and S_i(E_ν) is the antineutrino spectrum for each isotope. If TAO sampled the same composition of the reactor antineutrino flux as JUNO, it would be an ideal near detector for JUNO, cancelling all of the antineutrino shape uncertainty.

However, since TAO detects mainly the antineutrinos produced by one of the Taishan reactor cores, it could measure a flux different from the one seen by JUNO, with possibly different running periods. JUNO mainly receives reactor antineutrinos from two Taishan reactors and six Yangjiang reactors. The Taishan and Yangjiang reactors are different types of reactors, with 4.6 GW and 2.9 GW thermal power, respectively. The difference of the fission fractions for the four major isotopes, 235U, 238U, 239Pu and 241Pu, is accounted for by the term containing ∆f_i in Eq. 1.3.

When using TAO as the reference spectrum for JUNO, the statistical uncertainty of TAO is propagated to JUNO as an input bin-to-bin spectral shape uncertainty. Figure 1-5 shows the statistical uncertainty of TAO with three years of data taking, and the statistical uncertainty of JUNO with six years of data taking. The expected antineutrino event sample of TAO is nearly 20 times that of JUNO, and the statistical uncertainty of TAO is better than 1% in most of the energy region of interest.

Figure 1-5: The statistical uncertainty of TAO (JUNO) with three (six) years of data taking.

In Ref. [2], a 1% bin-to-bin spectral shape uncertainty was assumed for JUNO. The bin-to-bin uncertainty could be as large as 10% based on the uncertainty of the summation spectrum. Even with the constraint from the spectrum measurement of the Daya Bay experiment, the bin-to-bin uncertainty is at the 2% level, as indicated in Figure 1-4, due to the insufficient energy resolution of the Daya Bay experiment. With the constraint from the TAO experiment, the bin-to-bin uncertainty can be reduced to below the 1% level. Under the assumption of a 10% difference in fission fractions between TAO and JUNO, the bin-to-bin uncertainty from the reference spectrum is about 1%. Figure 1-6 shows the mass ordering sensitivity of JUNO as a function of the input bin-to-bin shape uncertainty. The markers show the cases of different reference spectra as inputs for JUNO. The mass ordering sensitivity (∆χ²) is improved with the input of TAO by ∼1.5 compared with the case using the Daya Bay reference spectrum, and is slightly better than the result in Ref. [2], which assumed a 1% bin-to-bin spectral shape uncertainty.

Figure 1-6: The neutrino mass ordering sensitivity of JUNO with the inputs of Ref. [2] as a function of the input bin-to-bin shape uncertainty. ∆χ² represents the mass ordering determination sensitivity [2], defined from the standard χ² of fitting the expected data under the normal (inverted) ordering hypothesis to the simulated data. Several cases using the TAO or Daya Bay constraints, as well as the JUNO yellow book [2] result, are shown as markers.

Another way to use the TAO reference spectrum in JUNO, instead of using Eq. 1.3, is to perform a combined analysis of the TAO and JUNO spectra. In this method, the correlation coefficients between the TAO and JUNO data are crucial, and the constraint from the TAO spectrum is naturally implemented in the combined analysis. A preliminary combined analysis obtains results consistent with those shown in Figure 1-6.

Search for light sterile neutrinos

The majority of experimental data from accelerator, atmospheric, reactor and solar neutrinos can be explained by the now well-established three-flavor neutrino mixing, parameterized by the PMNS matrix [4-6]. However, some observed phenomena are in tension with this three-flavor paradigm when one attempts to explain them by neutrino oscillations, which are a natural consequence of neutrino mixing.
Those anomalies are: the so-called Reactor Antineutrino Anomaly (RAA) [19], an observed ν̄e deficit with respect to the state-of-the-art prediction models; the anomalous ν̄e appearance in the ν̄µ beam at the LSND [28] and MiniBooNE [29,30] experiments; and the deficit in the number of νe's from radioactive calibration sources in gallium experiments [31]. All of these can be accommodated if the model is extended by an additional fourth neutrino with a mass splitting of approximately 1 eV². The corresponding flavor state would not participate in the weak interactions, since there are only three light active neutrinos [32], and it is therefore called 'sterile'. Nevertheless, it can still mix with the active states and manifest itself via neutrino oscillations.

We investigate the TAO sterile neutrino sensitivity in the framework of a 3+1 model, which contains an additional sterile state and a corresponding new mass state on top of the three known flavor and mass states. Taking into account TAO's short oscillation baseline, the oscillation probability for ν̄e disappearance can be approximated as

\[ P_{\bar\nu_e \to \bar\nu_e} \approx 1 - 4\,|U_{e4}|^2\left(1 - |U_{e4}|^2\right)\sin^2\!\frac{\Delta m^2_{41} L}{4E} - 4\,|U_{e3}|^2\left(1 - |U_{e3}|^2\right)\sin^2\!\frac{\Delta m^2_{31} L}{4E}, \quad (1.4) \]

where the mass splittings are defined as ∆m²_ij ≡ m²_i − m²_j, m_i being the mass of the i-th neutrino mass state, and the |U_ei| are elements of the extended 4 × 4 unitary mixing matrix. Using the parametrization of Ref. [33], U_ei can be expressed in terms of the neutrino mixing angles θ₁₂, θ₁₃ and θ₁₄.
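A minimal numerical sketch of the survival probability in Eq. (1.4) follows (Python). The parameter values are illustrative, and the θ₁₃ term is written in its usual sin²2θ₁₃ approximation:

```python
import numpy as np

# Sketch of the 3+1 survival probability of Eq. (1.4), with
# 4|Ue4|^2 (1 - |Ue4|^2) = sin^2(2*theta14). Values are illustrative.
def p_ee(E_MeV, L_m, sin2_2t14=0.1, dm2_41=1.0,
         sin2_2t13=0.085, dm2_31=2.5e-3):
    """Survival probability; dm2 in eV^2, L in meters, E in MeV."""
    d41 = 1.267 * dm2_41 * L_m / E_MeV   # oscillation phase, dm2*L/(4E) units
    d31 = 1.267 * dm2_31 * L_m / E_MeV
    return 1.0 - sin2_2t14 * np.sin(d41) ** 2 - sin2_2t13 * np.sin(d31) ** 2

E = np.linspace(1.8, 8.0, 7)
print(np.round(p_ee(E, L_m=30.0), 4))
# At 30 m the dm2_31 term is negligible, while a ~1 eV^2 sterile state
# imprints an energy-dependent deficit of order sin^2(2*theta14).
```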
We assume a simple geometry of a cylindrical reactor with a height of 3 m and a radius of 2 m, with antineutrinos produced uniformly in this volume. The TAO detector is spherical; its center is placed 10 m below the reactor center and 30 m away horizontally, to match the possible location of the experimental hall. Due to the proximity of the reactor and detector, neither can be treated as point-like in this study, and their dimensions are therefore taken into account in the oscillation probability calculation. The reactor antineutrinos are detected via the IBD reaction on free protons. As a nominal setting, we assume 3 years of data taking with 80% reactor-on time and 50% IBD detection efficiency. We use the Huber-Mueller model [11,12] to calculate the antineutrino spectra, however with an inflated bin-to-bin uncorrelated shape uncertainty of 5% (for 50 keV bin width), set as a conservative estimate with respect to Ref. [9]. The choice of the default spectrum shape has a negligible impact on the sensitivity. We take into account the major sources of background: accidental coincidences, decays of unstable muon spallation products (9Li and 8He), and fast neutrons. Rates and spectrum shapes are determined from the TAO simulation (see Section 2). We assume a uniformly distributed background when further dividing the fiducial volume into virtual segments.

In order to quantify the difference of the expected spectra with respect to the prediction, we define the χ² as

\[ \chi^2 = \sum_{i,j} \frac{\left(M_{ij} - P_{ij}\right)^2}{M_{ij}} + \sum_{b}\left(\frac{\alpha_b}{\sigma_b}\right)^2 + \chi^2_{\mathrm{shape}}, \quad (1.5) \]

where M_ij is the expected number of events and P_ij is the prediction for the i-th virtual segment of the fiducial volume and the j-th energy bin. The prediction is given as

\[ P_{ij} = (1+\alpha_\nu)(1+\alpha_j)\,R_{ij}\!\left(\sin^2 2\theta_{14}, \Delta m^2_{41}\right) + (1+\alpha_A)A_{ij} + (1+\alpha_L)L_{ij} + (1+\alpha_F)F_{ij}, \quad (1.6) \]

where R_ij is the reactor antineutrino spectrum, a function of the sterile oscillation parameters sin²2θ₁₄ and ∆m²₄₁, and A_ij, L_ij and F_ij are the accidental, 9Li/8He and fast neutron backgrounds, respectively, for the i-th segment and j-th energy bin. Each of the background components has a corresponding rate nuisance parameter α_b, with relative uncertainties of 0.1%, 10% and 10%, respectively. The antineutrino rate nuisance parameter α_ν is unconstrained. The last term in Eq. 1.5 represents the spectral shape uncertainty and is defined as

\[ \chi^2_{\mathrm{shape}} = \sum_j \left(\frac{\alpha_j}{\sigma^{\mathrm{shape}}_j}\right)^2, \]

with the corresponding nuisance parameters α_j fully correlated among segments and uncorrelated bin-to-bin. We use a 5%, 10% and 3% bin-to-bin uncorrelated relative shape uncertainty (for 50 keV bin width) for reactor antineutrinos, 9Li/8He and fast neutrons, respectively. The accidentals spectrum is assumed to be known without uncertainty. We minimize over all nuisance parameters in Eq. 1.5.

We use the CLs statistical method [34,35] to determine the TAO sterile neutrino sensitivity, where we assume the measured data to follow the classical three-neutrino model. The CLs method compares two hypotheses, in our case the classical (3ν) and the alternative sterile neutrino (4ν) scenarios. In order to reduce the computational demands, we employ the so-called Gaussian CLs method [36], which approximates the parent distributions with normal ones. As a nominal setting, we assume 3 years of data taking; increased statistics improves the sensitivity only slightly. We assume a conservative 5% (for 50 keV bin width) relative bin-to-bin uncorrelated reactor antineutrino shape uncertainty. This is the major systematic uncertainty, and its improvement will result in more stringent sterile neutrino limits: two times better limits can be achieved with a 2% uncertainty. We use four virtual segments of the TAO detector, which improves the sensitivity for ∆m²₄₁ ≲ 0.3 eV² by utilizing their relative comparison.

The search for sterile neutrinos via reactor antineutrino oscillations is in the scope of several experiments. The Daya Bay experiment used eight detectors placed at baselines ≳300 m to set the most stringent limit on the sterile neutrino mixing for ∆m²₄₁ ≤ 0.2 eV² [37]. Experiments such as PROSPECT [38], STEREO [39] and DANSS [40] look for the oscillation signature at very short baselines of ∼10 m, covering large values of ∆m²₄₁ from ∼0.2 eV² to ∼20 eV². The intermediate distances of ∼30 m were explored by the Bugey-3 [41] and NEOS [23] experiments, covering ∆m²₄₁ from approximately 3×10⁻² eV² to about 5 eV². The TAO sterile neutrino sensitivity to the new mixing angle θ₁₄ as a function of the new mass splitting ∆m²₄₁ is shown in Figure 1-7, together with a representative experiment of each baseline range. TAO is complementary to Daya Bay and to the very short baseline experiments, here demonstrated by the expected sensitivity of PROSPECT phase-I [42], while it is competitive with, and eventually leading among, experiments at ∼30 m distances, here represented by NEOS; Bugey-3 has a similar sensitivity. Furthermore, the TAO experiment is likely to set the best sterile neutrino limits around ∆m²₄₁ = 0.5 eV² with a future improvement of the reactor antineutrino spectrum uncertainty expected from Daya Bay.

Reactor monitoring and safeguard

Antineutrino detectors have proven the ability to monitor the nuclear reactor power in real time [43,44] and, on longer time scales, the fuel composition [45]. This provides a way of reactor monitoring complementary to the standard methods. Moreover, such a capability offers an interesting tool for safeguards against undeclared reactor operation and/or for independent verification of the declared reactor power and fissile inventory. The effort of developing such a monitoring tool, promoted by the International Atomic Energy Agency (IAEA), is ongoing across the globe. TAO is an ideal detector to contribute greatly to this effort.
More than 99.7% of the antineutrinos from a typical nuclear reactor come from decays of fission daughters of the four major isotopes: 235U, 238U, 239Pu and 241Pu. The number of emitted antineutrinos is, to first approximation, proportional to the reactor power, which enables real-time reactor power monitoring. In more detail, however, the antineutrino flux and energy spectrum change with the evolution of the nuclear fuel composition, as 235U in the reactor fuel is consumed and 239Pu and 241Pu are produced during the operation of a commercial reactor. Figure 1-8 shows an example of the evolution of the fission fractions, the relative contributions of each isotope to the total number of fissions, for the four major isotopes during a running cycle of one Daya Bay reactor [45]. The cycle between nuclear fuel replacements is usually a few months long.

Plutonium naturally bred in the reactor could nonetheless be of interest for military purposes, namely building nuclear weapons. To prevent such proliferation, IAEA representatives would like to monitor reactor operation and the fissile inventory. However, access to the operation information is not always available, and inspections might not be infallible, since they are not performed constantly. Neutrino detectors could provide such missing information or independently verify its truthfulness, from reactor operation activity on a daily basis (see e.g. [43,44]) to the fissile inventory in case of undeclared refueling and/or fuel processing (see e.g. [46,47]).

The main aim of the safeguard is to determine the amount of plutonium produced in the reactor and to reveal its eventual removal by fuel reprocessing. This, as well as reactor monitoring in general, can be done from measurements of the overall antineutrino flux and/or of the antineutrino energy spectrum. Each of the four isotopes has a unique antineutrino yield and produces a unique energy spectrum. The observed antineutrino flux and spectrum are linear combinations of the four isotopic contributions, weighted by their fission fractions, as sketched in the code example below. The change of the fuel composition with burn-up leads to the evolution of the antineutrino flux and spectrum, as demonstrated e.g. in [45]. Measuring these quantities with suitable detectors allows monitoring of the reactor performance and determination of the amount of plutonium produced. The necessary input for such an analysis is, of course, precise knowledge of the isotopic antineutrino yields and energy spectra. Their accurate measurement is in demand, since recent antineutrino experiments revealed discrepancies with the theoretical predictions [20-23,48].

TAO will bring a significant improvement in the precision of both the flux and spectrum measurements. Based on the variation of the reactor antineutrino spectrum as a function of the fission fractions, the individual isotopic spectra can be extracted from the experiment and later used by other experiments as an input. Using the same method as the Daya Bay experiment [25], which extracted the spectra of individual isotopes from commercial reactors for the first time, the antineutrino spectra can be acquired from the TAO data as well. The expected relative uncertainty of the extracted 235U and 239Pu antineutrino spectra for three years of data taking is shown in Figure 1-9. The uncertainty for TAO is smaller than in the Daya Bay result due to the advantage of monitoring a single reactor as opposed to six, and thus, among other things, a larger fission fraction variation.
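To make the linear-combination statement concrete, here is the sketch referenced above (Python). The per-isotope IBD yields are illustrative round numbers, not evaluated model values:

```python
# Observed antineutrino signal as a fission-fraction-weighted combination
# of per-isotope contributions. Yields are illustrative placeholders
# (in units of 1e-43 cm^2 per fission), not evaluated model values.
yields = {"U235": 6.7, "U238": 10.1, "Pu239": 4.4, "Pu241": 6.0}

def observed_yield(fission_fractions):
    """Total IBD yield per fission for a given fuel composition."""
    return sum(fission_fractions[i] * yields[i] for i in yields)

begin_of_cycle = {"U235": 0.65, "U238": 0.07, "Pu239": 0.23, "Pu241": 0.05}
end_of_cycle   = {"U235": 0.45, "U238": 0.08, "Pu239": 0.38, "Pu241": 0.09}

# As 239Pu replaces 235U with burn-up, the per-fission yield drops,
# which is the handle used for fuel-composition monitoring:
print(f"{observed_yield(begin_of_cycle):.3f} -> {observed_yield(end_of_cycle):.3f}")
```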
TAO will also provide the fine structure shape, thanks to its superb energy resolution of <2% at 1 MeV. TAO will join the global effort towards nuclear reactor monitoring using reactor antineutrinos. Among current and proposed experiments, it is envisaged to provide the most precise measurement of the 235U and 239Pu antineutrino spectra from commercial reactors. In addition, the spectra will be measured with unprecedented fine structure resolution. The TAO measurement can serve as an input for other reactor monitoring and safeguard studies.

The TAO detector

The reactor antineutrinos detected by JUNO come mainly from six Yangjiang reactors and two Taishan reactors. The six Yangjiang reactors have a full thermal power of 2.9 GW each. Currently, the Taishan nuclear power plant has two reactors with a full thermal power of 4.6 GW each. A candidate location for the TAO experiment is in a basement about 30 m from the center of one Taishan reactor core. With an overburden of several meters water equivalent, the measured cosmic muon flux is one third of that at ground level. The detector design will be described in detail in Section 3 and Section 5.

Figure 1 shows a sketch of the TAO detector. It consists of a central detector (CD), including a cryostat to keep the operating temperature at -50 °C, a water Cherenkov detector, and a passive shield. The central detector is a liquid scintillator (LS) detector with a spherical acrylic vessel of 1.8 m in diameter to contain the LS. A preliminary fiducial volume cut rejects the outer 0.25 m layer of LS, yielding a 1 ton fiducial mass within a radius of 0.65 m. The LS mixture is based on Linear Alkylbenzene (LAB) because of its excellent transparency, high flash point, low chemical reactivity, and good light yield. The LS is loaded with 0.1% gadolinium to effectively reject the accidental backgrounds, with a delayed IBD neutron capture signal of ∼8 MeV, much higher than the natural radioactivity backgrounds. The liquid scintillator also contains 2 g/L of 2,5-diphenyloxazole (PPO) as the fluor and 1 mg/L of p-bis-(o-methylstyryl)-benzene (bis-MSB) as the wavelength shifter. A small amount of ethanol (0.1%) is added in order to maintain the optical properties of the mixture at low temperature. The density of the GdLS is 0.916 g/ml at -50 °C. The light yield is about 12000 photons per MeV. The liquid scintillator is contained in an acrylic vessel, which is submerged in a liquid buffer in a cylindrical cryostat with a diameter of about 2 m, which preserves the temperature. The cryostat is filled with non-scintillating LAB as a liquid buffer in order to maintain good thermal performance. The scintillation light produced in the LS is detected by about 4100 Silicon Photomultiplier (SiPM) tiles with a total area of ∼10 m², as described in Section 6, which ensure a high photo-detection efficiency and >95% photo-coverage. To reduce the dark noise to a manageable level, the SiPM tiles need to be cooled to a low temperature (-50 °C). A copper sphere encloses the acrylic vessel, providing the mechanical support to keep the SiPM tiles pointing to the center of the detector. The outer surface of the copper sphere is used to mount the readout electronics and support the cooling pipes. The central detector is surrounded by water tanks to shield against environmental radioactivity from the rock and air.
The water tanks are equipped with 3-inch Photomultiplier Tubes (PMTs) to detect the Cherenkov light from cosmic muons, acting as a veto detector with an expected efficiency of >90%. In addition to the water tanks, layers of High Density Polyethylene (HDPE) are placed on top of the TAO detector to provide a passive shield, mainly against the neutrons produced by cosmic muons and the radioactivity of the materials outside the detector. The HDPE shielding is covered by a plastic scintillator layer on top for tagging cosmic muons. Lead bricks are laid at the bottom of the detector, acting as a passive shield against radioactivity.

Signal

Reactor antineutrinos (ν̄e) are generated from the fission products of four major isotopes: 235U, 238U, 239Pu and 241Pu. The ν̄e energy spectrum is measured via the inverse β-decay (IBD) reaction, ν̄e + p → e⁺ + n, in the gadolinium-doped liquid scintillator. The IBD cross section increases steadily for energies above its 1.8 MeV threshold. The antineutrino spectrum from a commercial reactor decreases with increasing energy, therefore the resulting IBD spectrum has a bell shape with its maximum around 3.5-4.0 MeV, as reported in Figure 2-1 from Ref. [9]. The coincidence of the prompt scintillation generated by the e⁺ with the delayed neutron capture on Gd provides a distinctive ν̄e signature. The IBD neutrons are predominantly captured by hydrogen, emitting one 2.2 MeV gamma, or by gadolinium, emitting several gammas with a total energy of about 8 MeV. The average capture time is about 30 µs with 0.1% Gd loading by mass. The gammas produced by the gadolinium capture are a clear signature of IBD events, above the energy of natural radioactivity. The antineutrino energy E_ν̄e is related to the detected prompt energy from the positron, E_e⁺, as E_ν̄e ≈ E_e⁺ + (m_n − m_p − m_e). The kinetic energy of the outgoing neutron is less than tens of keV, which can be ignored in a first-order approximation.

The energy deposited by the positron in the scintillator is converted to light, and the energy resolution is, to first order, determined by the photon counting statistics. The light yield of the LS is larger than 12000 photons per MeV, and about 4500 photoelectrons can be collected, corresponding to an energy resolution of 1.5%/√E[MeV]. TAO is designed to provide a photon detection efficiency of ∼50%. This requirement can be satisfied by using SiPMs as the photosensors. The expected antineutrino energy spectrum in the TAO detector, ignoring neutrino oscillation, is expressed as

\[ S(E_\nu) = \frac{N_p\,\epsilon}{4\pi L^2}\,\sigma(E_\nu)\,\phi(E_\nu), \quad (2.1) \]

where E_ν is the ν̄e energy, N_p is the target proton number, ε is the detection efficiency, L is the distance from the detector to the reactor, σ(E_ν) is the IBD cross section, and φ(E_ν) is the reactor antineutrino flux integrated over time. N_p is about 7.2 × 10²⁸ per ton of LS, assuming a 12% hydrogen mass fraction. The baseline is about 30 m. The reactor antineutrino flux from the Taishan reactor core with 4.6 GW thermal power is calculated with nominal fission fractions of 0.561, 0.076, 0.307 and 0.056 for 235U, 238U, 239Pu and 241Pu, respectively.

The IBD signals are selected based on tagging the ∼8 MeV signal of neutron capture on Gd as the delayed signal. The H capture signal, with a lower energy of 2.2 MeV, is not considered in the IBD selection, otherwise the background rate would increase by about one order of magnitude.
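Two of the numbers above are easy to verify numerically. A minimal check (Python; the particle masses are standard values not spelled out in the text):

```python
AVOGADRO = 6.022e23
GRAMS_PER_TON = 1e6
H_FRACTION = 0.12          # hydrogen mass fraction of the LS (from the text)

# Free-proton count per ton of LS (each hydrogen nucleus is one free proton):
N_p = GRAMS_PER_TON * H_FRACTION / 1.008 * AVOGADRO
print(f"N_p ~ {N_p:.2e} per ton")        # ~7.2e28, as quoted

# Antineutrino energy from the detected prompt energy:
# E_nu ~ E_prompt + (m_n - m_p - m_e)
m_n, m_p, m_e = 939.565, 938.272, 0.511  # MeV (standard values)
print(f"energy offset = {m_n - m_p - m_e:.3f} MeV")  # ~0.78 MeV
```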
The overall detection efficiency with a preliminary set of cuts is about 50% in the 1 ton fiducial volume, from the Geant4 simulation. The IBD detection efficiency can be broken down as follows:

1. The IBD neutrons are mostly captured by Gd and H. The Gd capture fraction is 87%, which is the main detection channel. The H capture fraction is 13%, and the C capture fraction is less than 0.1%.

2. To select the ∼8 MeV delayed signal of neutron capture on Gd, a 7-9 MeV energy cut can be applied, which yields a 59% detection efficiency. A simulation of the delayed energy for neutron capture on Gd events is shown in Figure 2.

Integrating the ν̄e energy in Eq. 2.1, the TAO detector will detect about 2000 IBD events per day in the fiducial volume based on the preliminary selection cuts. The selection cuts are still being optimized, taking into account the detector design and the backgrounds. The observed number of IBD events is further reduced by the decrease of live time due to the cosmic-ray muon veto. The expected muon veto efficiency is larger than 90%.

Backgrounds

A background event for IBDs consists of two events which pass the selection criteria but are not caused by reactor antineutrinos. These two events, prompt and delayed, can be correlated or uncorrelated in time. Two uncorrelated but randomly close events passing the energy cuts form a so-called accidental background. Natural radioactivity is one of the major sources of prompt events, because it easily passes the >0.9 MeV prompt energy cut but practically cannot pass the >7 MeV delayed energy cut. This is also one of the reasons for loading Gd in the liquid scintillator: it produces delayed events with an energy of ∼8 MeV. Neutrons in the environment or produced by cosmic muons are the major sources of delayed candidates if they are captured on Gd. The other type is the correlated background, in which the prompt and delayed signals are correlated in space and time, such as the fast neutron background and the 8He/9Li background; these are produced by cosmic muons as spallation products.

The fast neutron background, arising from cosmic muon spallation in the materials surrounding the detector, is the leading background component due to the relatively small overburden of the TAO experiment. Fast neutron interactions are characterized by a prompt energy deposit from the recoil proton and a delayed deposit from the neutron capture after thermalization. The muon rate in the experimental hall is about 70 Hz/m², namely one third of the rate at the surface. A muon event generator is used to generate muons on the ground, and a Geant4 simulation package propagates the muons to the detector. The detector geometry, shown in Figure 1, is implemented in the simulation. The fast neutron background is selected from the muon simulation sample with the same criteria as the IBD selection. The fast neutron background rate is estimated to be 1880 events/day if no muon veto is applied, similar to the IBD signal rate. To reduce the fast neutron background, a muon veto will be applied. The muon event rate tagged by the veto detector, either the water tank or the plastic scintillator, is about 4000 Hz. The veto time window therefore cannot be as long as in Daya Bay (600 µs) [49] or JUNO (1.5 ms) [2]; that would result in nearly 100% dead time. A preliminary 20 µs veto window is envisaged, which will introduce less than 10% dead time.
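Two quick consistency checks on the numbers above (a sketch; the dead-time estimate assumes a simple fixed veto window per tagged muon, which the text does not specify):

```python
import math

# (1) The two quoted efficiency factors already multiply to ~0.51,
# consistent with the ~50% overall IBD efficiency quoted above:
gd_capture, delayed_cut = 0.87, 0.59
print(f"Gd capture x delayed-energy cut = {gd_capture * delayed_cut:.2f}")

# (2) Dead time from a 20 us veto window at a 4000 Hz tagged-muon rate,
# assuming each muon vetoes a fixed window:
rate, window = 4000.0, 20e-6
print(f"dead time ~ {100 * (1 - math.exp(-rate * window)):.1f}%")
# ~7.7%, consistent with the quoted 'less than 10%'.
```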
The veto time is sufficient to cut most of the fast neutron background. After applying the muon veto, the fast neutron background is reduced to less than 200 events/day. The muon veto detector also functions as shielding material against neutrons penetrating to the central detector. The shielding helps to reduce the rate of the fast neutron background produced outside the veto detector without muon tagging, which contributes about 1/3 of the fast neutron background. The muon veto cut cannot reject some of the delayed signals, because the neutron capture time is about 30 µs on average. Those signals in the energy range of 7-9 MeV without a coincident prompt signal are called delayed-like signals; they have a rate of 0.22 Hz and can form the delayed signal of an accidental background.

Despite the short distance between the detector and the reactor core, the largest source of γ-rays is natural radioactivity, namely the concrete of the walls of the experimental hall and the Printed Circuit Boards (PCBs) hosting the photosensors and the readout electronics. 40K emission dominates the concrete-induced γ-ray yield, while 238U and 232Th are responsible for most of the PCB-induced γ-ray flux. The former can be reduced with passive water shielding, while the latter needs to be controlled through careful material selection. We simulated events due to the radioactive elements contained in the concrete, PCBs, stainless steel vessel, water, acrylic, GdLS and HDPE, and found a total event rate below 100 Hz above 0.9 MeV with 1.2 m thick water shielding. The accidental background rate can be calculated as R_d(1 − exp(−R_p∆T)), where R_d is the rate of delayed events, R_p is the rate of prompt events, and ∆T is the coincidence time. With the inputs R_d = 0.22 Hz, R_p = 100 Hz, and ∆T = 100 µs, the expected accidental event rate is 190/day, at a similar level to the fast neutron background.

In the TAO liquid scintillator, cosmic muons can interact with 12C and produce radioactive isotopes. Among them, 9Li and 8He, with half-lives of 0.178 s and 0.119 s, respectively, are the most serious correlated background sources for IBD signals, because they can decay by emitting both an electron and a neutron, which form the prompt and delayed coincident signals, respectively. The 9Li and 8He yield is often modeled empirically as being proportional to E_µ^0.74, where E_µ is the average energy of the muons at the detector [50]. The production yield has been measured in the KamLAND detector [51]. For the TAO experiment, the production yield can be extrapolated considering that the average muon energy is 260 GeV at KamLAND and 8.4 GeV at TAO. The 20 µs veto window used against the fast neutron background can hardly reject 9Li and 8He because of their long half-lives. The production rates of 9Li and 8He in the fiducial volume of the TAO detector are 45 and 9 per day, respectively. Table 2-1 summarizes some important results of the singles and background simulation.
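Both the accidental-rate formula and the muon-energy scaling above are one-line checks (Python; the absolute KamLAND yield is not given here, so only the scaling factor is computed):

```python
import math

# Accidental background: R_acc = R_d * (1 - exp(-R_p * dT))
R_d, R_p, dT = 0.22, 100.0, 100e-6   # Hz, Hz, s (values from the text)
per_day = R_d * (1 - math.exp(-R_p * dT)) * 86400
print(f"accidental rate ~ {per_day:.0f}/day")   # ~189/day, matching ~190/day

# 9Li/8He yield scaling with average muon energy, Y ~ E_mu^0.74:
scale = (8.4 / 260.0) ** 0.74
print(f"TAO/KamLAND yield scaling ~ {scale:.3f}")
```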
Energy resolution

The energy resolution is a key parameter for the TAO experiment. It is predominantly determined by the statistics of the collected photoelectrons (p.e.). Compared with 1200 p.e./MeV in the JUNO experiment [2], a photoelectron yield of 4500 p.e./MeV is expected, considering the following improvements:

• The coverage of photon sensors is improved to ∼95%, from 75% in JUNO.

• The photon detection efficiency is improved to ∼50% using SiPMs, compared with ∼27% with PMTs in JUNO.

• The smaller dimensions of the TAO detector increase the photoelectron statistics by 40%, because fewer photons are absorbed in the liquid scintillator.

The expected energy resolution as a function of energy, obtained from the TAO detector simulation, is shown in Figure 2. It takes into account several detector effects:

• Statistics: A preliminary Monte Carlo study shows that a photoelectron yield of about 4500 photoelectrons per MeV can be reached, providing the energy resolution required by the TAO physics goals.

• Scintillator quenching: The quenched energy is simulated step by step in Geant4 using Birks' law [53] as

\[ \Delta E_q = \frac{\Delta E}{1 + k_B\,\frac{dE}{dx} + C\left(\frac{dE}{dx}\right)^2}, \]

where k_B = 6.5 × 10⁻³ g/cm²/MeV and C = 1.5 × 10⁻⁶ g²/cm⁴/MeV² are Birks' constants, dE/dx is the stopping power, ∆E is the deposited energy before quenching, and ∆E_q is the quenched energy, which is used to determine the mean number of scintillation photons to be generated. The interaction processes of a particle in LS are random, and the particle energy and deposited energy in each step fluctuate event by event. As a result, the total quenched energy fluctuates even for monoenergetic particles.

• Charge resolution: The charge resolution of one SiPM channel is assumed to be 16% in the simulation. The number of SiPM channels is about 4100. The energy resolution due to the SiPM charge resolution is 0.16/√N_hit, where N_hit is the number of fired channels. N_hit varies from 2800 to 4100 as a function of energy in the range of 1-10 MeV, as calculated from a toy MC.

• Cross talk: Due to the optical cross talk effect, a SiPM can generate multiple photoelectrons when there is only one real photoelectron. The number of generated superfluous photoelectrons n follows the distribution

\[ P(n) = (1-c)\,c^{\,n}, \]

where the cross talk probability c is a parameter of the SiPM properties. In the simulation, the number of cross-talk photoelectrons fluctuates in each event. We subtract the average number of cross-talk photoelectrons, while its fluctuation contributes to the energy resolution. The cross talk probability is assumed to be 10% based on the studies of the SiPMs described in Section 6.

• Dark noise: The dark noise rate is about 250 kHz/mm² for a typical SiPM at room temperature. At -50 °C it can be reduced by three orders of magnitude, to 100 Hz/mm². The total area of the SiPMs is about 10 m², and the readout time window is set to 1 µs here. The fluctuation of the number of dark-noise photoelectrons is then √1000. This fluctuation affects the energy resolution in the energy reconstruction; it contributes a constant term to the energy resolution, independent of the visible energy of physical events.

• Neutron recoil: The kinetic energy of the neutron in the IBD final state takes a part of the initial energy of the incident antineutrino, which introduces an energy smearing of the positron in the final state. Based on the kinematics of the IBD reaction, the energy spread of the positron is

\[ \Delta E \approx \frac{2\,E_\nu\,(E_\nu - \Delta_{np})}{M_p}, \]

where ∆_np is the mass difference of the neutron and the proton, M_p is the proton mass, and E_ν is the energy of the incident antineutrino. The resulting energy resolution is ∆E/√12, as an approximation of the standard deviation of a uniform distribution. However, a small fraction of the neutron kinetic energy is detectable: the recoil protons produced in the neutron thermalization process generate a small amount of scintillation photons that mix with the photons produced by the positron. Considering the quenching effect of the recoil proton, the energy resolution of the neutron recoil effect becomes (1 − Q_f)∆E/√12, where Q_f is the average quenching factor for the IBD neutron, about 0.29 as determined from simulation. This effect arises in the IBD reaction itself, and is relevant to the determination of the antineutrino energy rather than to the detection of the visible energy.
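A simplified quadrature combination of the contributions above shows how the sub-percent total arises. This is a rough sketch, not the full Geant4 treatment described in the text; in particular, the cross-talk variance term is a crude approximation:

```python
import math

Y = 4500          # p.e./MeV (from the text)
SIGMA_Q = 0.16    # single-channel SiPM charge resolution (from the text)
N_DARK = 1000     # mean dark-noise p.e. in a 1 us window (10 m^2 at 100 Hz/mm^2)
XTALK = 0.10      # cross-talk probability (from the text)

def resolution(E, n_hit):
    """Approximate fractional energy resolution at visible energy E (MeV)."""
    n_pe = Y * E
    stat   = 1.0 / math.sqrt(n_pe)              # photoelectron statistics
    charge = SIGMA_Q / math.sqrt(n_hit)         # SiPM charge resolution
    dark   = math.sqrt(N_DARK) / n_pe           # dark-noise fluctuation (constant term)
    xtalk  = math.sqrt(XTALK / n_pe)            # rough cross-talk fluctuation term
    return math.sqrt(stat**2 + charge**2 + dark**2 + xtalk**2)

for E, n_hit in [(1.0, 2800), (4.0, 3800), (8.0, 4100)]:
    print(f"E = {E:.0f} MeV -> sigma/E ~ {100 * resolution(E, n_hit):.2f}%")
# The total drops below 1% above a few MeV, qualitatively as in Figure 2.
```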
Systematic uncertainties

The goal of the TAO experiment is to precisely measure the reactor antineutrino energy spectrum. The systematic uncertainty of the expected number of IBD events could be a few percent, based on the experience of previous reactor antineutrino experiments [49]. The detection efficiency uncertainty dominates the uncertainty of the number of events. The uncertainties independent of the antineutrino energy (rate uncertainties), which affect the overall antineutrino rate, do not impact the precision of the measured antineutrino spectral shape.

The precision of the spectral shape measurement is driven by three main uncertainties. The first is the statistical uncertainty, uncorrelated between energy bins. The second is the energy scale uncertainty, determined by the uncertainties of the parameters in the energy scale model. The third is induced by the fiducial volume cut, which distorts the energy spectrum due to energy leakage. Figure 2-5 shows the impacts of the three major shape systematic uncertainties on the reactor antineutrino spectrum measurement.

The IBD signal event rate is about 2000/day; two (four) million events will be collected in three (six) years of data taking. The statistical uncertainty of TAO is one source of the bin-to-bin shape uncertainty for JUNO when propagating the uncertainty of the reference spectrum from TAO to JUNO. It is smaller than 1% in the energy range of 2-5 MeV, using the same bin width (35 keV) as that used in the mass ordering analysis of JUNO [2]. The nonlinearity uncertainty is taken from Daya Bay [54], under the assumption of similar performance between TAO and Daya Bay, which use quite similar liquid scintillators. Due to the relatively small size of the TAO detector, a fraction of IBD events have energy leakage even with the 25 cm fiducial volume cut. The energy leakage is simulated using Geant4 and can also be investigated by comparing events produced by calibration sources placed at the edge of the detector in simulation. Different fiducial volume cuts correspond to different energy leakage and distorted energy spectra. A preliminary reconstruction algorithm shows that a <5 cm vertex resolution and a <5 cm bias in the radial direction can be obtained for IBD events. With the assumption of 5 cm vertex resolution and bias, a toy MC is performed to propagate the vertex uncertainty to the simulated energy spectrum of events surviving the fiducial volume cut, and the corresponding shape uncertainty is estimated. As shown in Figure 2-5, the statistical uncertainty dominates over the majority of the energy range.

Overview

The TAO experiment has two primary physics goals:

• to provide a model-independent reference spectrum for JUNO, and

• to provide a benchmark to test nuclear databases by measuring the fine structure of the spectrum,

together with the other goals described in Sec. 1. The first goal requires <3%/√E[MeV] energy resolution and more than 10 times the statistics of JUNO, and is relatively easy to achieve. The second has no specific requirement, but favors energy resolution and statistics as high as possible.
The Central Detector (CD) of TAO is designed to have a fiducial mass of one ton of Gadolinium-doped Liquid Scintillator (GdLS). The reactor antineutrino event rate is more than 30 times higher than that of JUNO, with the selection efficiency included. The layout and support of the Silicon Photomultipliers (SiPMs) will be optimized for the highest possible coverage. Three options are under consideration, with 94%, 95.5%, and 96.9% coverage, respectively. About 4500 p.e./MeV could be reached.

The TAO central detector is a two-layer detector, with an inner layer of 2.8 ton GdLS as the antineutrino target, contained in a spherical acrylic vessel, and an outer layer of 3.45 ton Linear Alkylbenzene (LAB) as buffer liquid, contained in a cylindrical Stainless Steel Tank (SST). The SiPM photosensors will be installed on a spherical copper shell wrapping the acrylic vessel, with an 18 mm distance between the SiPM surface and the acrylic vessel. To reduce the dark noise of the SiPMs, the whole CD will be operated at -50 °C.

The GdLS for TAO is adapted from the Daya Bay GdLS [55] by reducing the fluor concentration and adding a co-solvent to avoid fluor precipitation at low temperature. The carboxylic gadolinium complex is dissolved in LAB with a Gd mass fraction of 0.1%. The concentration of the fluor PPO is 2 g/L and that of the wavelength shifter bis-MSB is 1 mg/L. To improve the solubility of the fluor and the wavelength shifter, a co-solvent of 0.05% ethanol is added to the GdLS. LAB can be used as a non-scintillating buffer liquid and works at -50 °C without problems if the water content is carefully removed. The mechanical structures, the cryostat design, and the R&D of the low temperature GdLS are described in the following.

Mechanical structures

The scheme of the central detector is shown in Figure 3-1. The 2.8 ton of GdLS is filled in a 1.80 m diameter spherical acrylic vessel. After a 25-cm fiducial volume cut to reduce the energy spectrum distortion due to gamma energy leakage into the buffer liquid, the fiducial mass is 1 ton. The thickness of the acrylic vessel is 20 mm. A 12-mm thick spherical copper shell with an inner diameter of 1.882 m wraps the acrylic vessel and provides the mechanical support for the SiPM tiles and their readout electronics. About 4100 SiPM tiles, each 50 × 50 mm² in dimension, are installed on the inner surface of the copper shell. Each tile consists of an 8 × 8 array of 6 mm pixels, or a 5 × 5 array. The Frontend Electronics (FEE) is on another PCB on the outer side of the copper shell. The copper shell is submerged in buffer liquid (LAB) contained in a cylindrical stainless steel tank with an inner diameter of 2.10 m and an inner height of 2.20 m. The tank is wrapped with 20 cm of insulation material, Polyurethane (PU) foam, to limit the heat leakage to less than 500 W.

The dimension chain of the CD is listed in Table 3-1. From the center of the CD to the outermost insulation support panel, the materials and structures are: GdLS, liquid bag (optional), the 20-mm thick acrylic vessel, an 18 mm gap filled with buffer liquid, the 3-mm thick SiPMs on PCBs, the 12-mm thick copper shell, buffer liquid, the 5-mm thick SST, the 200-mm thick thermal insulation layer, and a 3-mm thick stainless steel panel that contains the insulation layer. The copper shell is spherical and the SST is cylindrical, therefore heights are given starting from the buffer liquid. The height of the SST is 2200 mm, but the buffer liquid will be filled to the 2100 mm level, leaving 100 mm of space for liquid overflow and a nitrogen cover.
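The quoted masses follow directly from the geometry above and the GdLS density given earlier in this report (0.916 g/ml at -50 °C). A quick check (Python):

```python
import math

r_vessel = 1.80 / 2        # inner radius of the acrylic vessel, m
r_fid = r_vessel - 0.25    # after the 25 cm fiducial cut -> 0.65 m
rho = 916.0                # GdLS density at -50 C, kg/m^3 (0.916 g/ml)

m_fid = rho * 4.0 / 3.0 * math.pi * r_fid ** 3
m_tot = rho * 4.0 / 3.0 * math.pi * r_vessel ** 3
print(f"fiducial mass ~ {m_fid:.0f} kg, total GdLS ~ {m_tot:.0f} kg")
# ~1054 kg and ~2797 kg, matching the quoted 1 ton and 2.8 ton.
```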
The buffer liquid inside and outside the copper shell is connected, so its weight is summed in one row, totaling 3.45 ton. The diameter of the CD is 2.506 m, the height is 2.626 m, and the total weight is about 9.9 ton, without flanges, possible reinforcing ribs, and other minor parts. The mechanical structure of the CD includes the SST, the support of the SiPMs (i.e. the copper shell), and the acrylic vessel. They are described in the following, together with the nitrogen system, cabling and piping, and assembly.

Stainless steel tank

The stainless steel tank is the outer vessel of the antineutrino detector of TAO. It provides mechanical support for all components inside the SST, as well as for the Automatic Calibration Unit (ACU, see Section 4) and the overflow tank on the top of the lid. It also provides an air-tight environment for the liquid scintillator. Inside the SST, the temperature will be maintained at -50 °C, while outside the SST it is at room temperature, so a layer of insulation is required. The SST is a cylindrical vessel made of SS 304. The dimensions and requirements of the SST are listed in Table 3-2. Constrained by the transportation passageways, especially the elevator to the underground laboratory in the basement of the building, which has a dimension of 1990 (Depth) × 1390 (Width) × 1990 (Height) mm, the SST has to be shipped in parts and welded together in the laboratory. The SST consists of the lid, the barrel, the bottom, and the support. The lid and the bottom are both divided into 3 pieces. The barrel is divided into 6 pieces, as shown in Figure 3-2. These parts will be welded together in the underground laboratory. On the lid of the SST, there are an overflow tank, the buffer liquid inlet, and the coolant inlets/outlets. The overflow tank has a flange serving as the interface to the automatic calibration unit. On the wall of the SST, the exact number and size of the cable feedthroughs depend on the SiPM readout scheme, which has a few options to be determined with further R&D. Preliminarily, we have designed six DN160 flanges for mounting feedthroughs on the wall, which keep open the possibility of leading out 4100 channels for all options. The support legs are welded to the bottom of the cylinder and will sit on blocks made from Fiber Reinforced Plastics (FRP) or titanium alloy to maintain good insulation. At the bottom of the SST, there are outlets for cleaning, and outlets for the buffer liquid and the GdLS. An FRP cylinder is designed to support and reinforce the bottom panel. Another option is to use a support frame linked with the legs. The structure is shown in Figure 3. The big flange between the barrel and the lid will be challenging, since it has to be tailored due to the limitation of the transportation passageway (elevator). The flange will be welded onsite, with tools to control the deformation of the flange and the lid. The grooves will be polished after welding. A double O-ring is designed for the air-tight sealing. A couple of companies with existing experience have been identified. The structure of the lid is shown in Figure 3. The strength of the SST has been analyzed with Finite Element Analysis (FEA) for all possible working conditions, such as empty tank, overturn, and during filling, at both room temperature and low temperature. Due to the limited height of the laboratory, the copper shell and the acrylic vessel inside have to be installed into the SST horizontally. Then, the assembled SST needs to be overturned from the horizontal position to the vertical position.
The strength of the SST is sufficient, and the only operation requiring special attention is the overturn.

SiPM support

In order to have a photosensor coverage close to 100%, the design of the layout and support of the SiPMs is challenging. The support should have good thermal conductivity, since the heat produced by the readout as well as by the SiPMs themselves needs to be removed smoothly so that the working temperature of the SiPMs stays stable. The support should be made of very low background material, as it is only several centimeters from the GdLS. The support material should be compatible with the buffer liquid and should have good mechanical strength to avoid deformation and damage to the SiPM tiles during assembly, overturn (see Section 3.2.6), and lifting. The design of the support structure is shown in Figure 3-4. It is a shell structure made of about 1200 kg of oxygen-free copper. The copper shell also supports and fixes the acrylic vessel through only two flanges, on the top and at the bottom of the acrylic vessel. The 12-mm thickness is determined by both the mechanical strength and the thermal capacity needed to stabilize the temperature. The sphere is divided into 6 pieces. The upper and lower half spheres each consist of 3 pieces. They are bolted together through the reinforcing ribs on the edge of each piece. The tolerance of the diameter is designed to be ±2.5 mm. The deformation of the support structure is estimated to be about 2-4 mm for all working conditions. It is also possible to divide the copper shell latitudinally rather than longitudinally. In this case, eight pieces are required. Four big pieces are ring-shaped and make up about 80% of the sphere. The other four small pieces combine as the flange covers for the top and the bottom, as shown in Figure 3-5. Manufacture of the copper rings will be more difficult, but a higher SiPM coverage could be achieved with this option. The PCB of each SiPM tile is fixed onto the copper shell with two bolts. The position tolerance of the bolt holes, expressed in angular precision, is 0.01°. Due to the height limitation of the laboratory, the assembled copper shell (with SiPM tiles installed) has to be installed into the SST horizontally, with both laid flat. Three guide rails are designed on the inner wall of the SST. After installation, the guide rails also fix the X and Y positions of the copper shell. A potential problem is the compatibility of the buffer LAB with the copper shell. Copper easily develops a green patina. Our compatibility tests show that copper can pollute the candidate buffer liquid LAB, especially when the copper surface is not clean enough. We also find that passivation of the copper noticeably improves the compatibility. Further R&D is necessary to determine whether the pollution is acceptable, whether the passivation could be damaged during assembly, whether there are better buffer liquids, etc. The deformation of the detector components when cooling down from room temperature to -50 °C must be studied seriously. The PCB of the SiPM tile contracts more than copper, so the gaps between adjacent PCBs will increase and no interference will occur. However, the connection between the PCBs and the bolts may break, since the largest contraction of the PCBs will be 0.26 mm, as shown by finite element analysis. A prototype test of a local model will be done to check the design and the assembly procedure of the SiPM tiles.

Acrylic vessel

The reactor antineutrino target, GdLS, will be contained in an acrylic vessel with an inner diameter of 1800 mm and a thickness of 20 mm.
The thickness could be reduced by about 5 mm while keeping sufficient strength. The acrylic vessel will be divided into 3 pieces, as shown in Figure 3. There is an upper chimney on the top of the acrylic vessel, connecting to the overflow tank by flanges and a bellows. The upper chimney is fixed in the chimney of the copper shell via jack bolts during installation, but is loosened during running. The bottom chimney connects to the GdLS outlet by a flange, and is fixed onto the copper shell via a clamp. The stress of the acrylic vessel has been analyzed with FEA. Acrylic material has been studied extensively by JUNO to construct its 35.4-m diameter acrylic vessel [1]. The maximum stress of the acrylic vessel should be less than 5 MPa for long term use to avoid crazing and cold flow, and can be relaxed to 10 MPa for short terms (days). For the TAO acrylic vessel, the stress is small when empty and after filling. The maximum stress is found around the central supporting leg at the very bottom during filling with unequal liquid levels inside and outside. The worst case happens when the inside (GdLS) liquid level is at the equator, i.e. at 900 mm, defining 0 mm at the bottom of the acrylic vessel. For this case, the maximum stress of the acrylic vessel is shown in Figure 3-8 for different outside liquid levels. When no buffer liquid is filled, the stress is close to 16 MPa. When the buffer is fully filled (while the GdLS is still half filled), the stress is 12 MPa. To keep the stress below 10 MPa during filling, the inner and outer liquid level difference should be controlled to be less than 400 mm. More FEAs have been done in order to optimize the thickness of the acrylic vessel. Both 20-mm and 15-mm thick acrylic vessels were analyzed to investigate the allowed liquid level difference during filling. Both options are safe enough during installation, cooling, and running, taking 5 MPa as the allowed stress. The filling process is considered the worst case. Taking 10 MPa as the allowed stress, Figure 3-9(a) shows the allowed liquid level difference for GdLS at different levels for 20-mm thick acrylic. The maximum allowed value is about 400 mm, regardless of whether the outside liquid level is above or below the inside one. If the thickness is reduced to 15 mm, the vessel is still safe when the liquid level difference is controlled to be less than 250 mm, as shown in Figure 3-9(b). The three acrylic pieces will be bonded together to form the spherical vessel in the laboratory. The bonding uses about 5 kg of flammable liquid MMA monomer and the bonding process takes about 2 days, followed by annealing and polishing procedures. Such an operation might carry a certain fire risk. An alternative option has been considered, with the acrylic pieces connected together without bonding, in case bonding is not allowed in the laboratory close to the reactor core. In that case, a liquid bag is required inside the acrylic vessel to contain the liquid. The acrylic vessel will provide mechanical support to the liquid bag. A similar scheme has been studied for JUNO and is believed to be feasible after a prototype test and technical reviews [1], although it may carry larger risks than the current acrylic scheme of JUNO. A small adaptation will be needed for TAO, mainly the flange connecting the liquid bag and the chimney of the acrylic vessel. When using the alternative liquid bag option, the buffer liquid has to have a density matched to the GdLS.
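The liquid-level constraint above is driven by the hydrostatic pressure difference across the vessel wall; a minimal sketch of this driving load, using the room-temperature GdLS density given below:

# Hydrostatic pressure difference across the acrylic wall for a given
# liquid level difference between GdLS (inside) and buffer (outside).
rho = 860.0        # kg/m^3, GdLS density at room temperature
g = 9.81           # m/s^2
dh = 0.40          # m, maximum allowed level difference for 20-mm acrylic

dp = rho * g * dh  # Pa
print(f"pressure difference = {dp/1000:.1f} kPa")
# ~3.4 kPa; the actual stress limit comes from FEA, this is only the driving load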
The density of the GdLS is 0.86 kg/L at room temperature. The buffer liquid should have a density slightly lower than the GdLS. LAB is a natural choice, although it might be aggressive to the bag material.

Nitrogen system

The water content in the GdLS and the buffer liquid (LAB) needs to be reduced to a very low level. Laboratory tests show that the water in the GdLS and the buffer liquid should be less than 10 ppm and 5 ppm, respectively, in order to maintain transparency at -50 °C. When exposed to air, water vapor will be absorbed into the GdLS and the buffer liquid. Therefore, nitrogen protection and bubbling before filling are necessary to remove the water. Nitrogen bubbling also helps to improve the light yield of the GdLS by about 13% and to improve the pulse shape discrimination power by purging the oxygen. Radon in the air of the laboratory could permeate into the detector, dissolve into the liquid scintillator, and produce backgrounds. A nitrogen cover will help to reduce the radon permeation. However, experience from the Daya Bay experiment shows that even moderate precautions are enough to reduce the backgrounds from radon to an acceptable level, due to the high event rate at such a short baseline to the reactor. During operation, the liquid has to be covered with flowing nitrogen to isolate the detector from water vapor and radon in the air. One consideration is to use a circular pipe with many holes installed at the bottom of the SST to purge the water content in the buffer liquid, while the GdLS will be purged before filling into the acrylic vessel through an air-tight filling system. The inlet and outlet of the nitrogen are designed on the lid of the SST and sealed with double O-ring flanges. Nitrogen will be provided either by liquid nitrogen bottles or by a small nitrogen generator. For the first option, the bottles need to be shipped every two weeks during data taking. For the second option, maintenance of the generator might increase the workload in the power plant. Again, commercial-purity liquid nitrogen and moderate precautions are enough for TAO to reach an acceptable level of radon background.

Piping, cabling, and backend electronics box

Cooling pipes are attached to the surface of the copper shell to effectively remove the heat generated by the SiPMs and the readout electronics, so that the temperature stays stable over the whole volume of GdLS in the acrylic vessel, with minimal convection of the buffer liquid. The pipes, made of copper, will be fixed on the copper shell by bolts with a certain pressure. They penetrate the sidewall of the SST, go down to the ground along the outside of the SST, and finally link to the refrigerator. The layout and routing of the cooling pipes will be elaborated later, and could be further optimized with a prototype test. The SiPM tiles and the FEE boards are mounted on the inner and outer surfaces of the copper shell, respectively, and are connected by a high density connector (such as a Samtec connector). The FEE and the SiPM tile can be connected with the two PCBs parallel or orthogonal to each other. The readout cables of the FEE are routed along the outer surface of the copper shell and connect to the Field Programmable Gate Array (FPGA) boards, which are installed in the upper part of the SST. The signals from the FPGA boards are read out by the Data Acquisition (DAQ) system via optical links through the feedthroughs on the sidewall of the SST. To reduce the number of cables penetrating the SST wall, it is preferred to keep the backend electronics in the tank.
The signal cables of the 4100 SiPM tiles will be routed into three backend electronics boxes near the feedthroughs on the SST wall. The backend electronics, including the FPGAs, usually produce a lot of heat. These boxes are therefore designed to be thermally insulated, e.g. made of Teflon. Some branches of the cooling pipes will pass through the insulated boxes to take the heat away directly. With these backend electronics boxes, the cable plugs can be accommodated in three flanges, although the readout scheme, discrete or ASIC, is still to be determined.

Assembly

The TAO detector will be pre-assembled at the Institute of High Energy Physics (IHEP) in Beijing, then disassembled and shipped to the Taishan power plant, and assembled again there. Due to the transportation limits, no component can be larger than 1.99 × 1.39 × 1.99 m in dimension. The SST and the acrylic vessel used for pre-assembly at IHEP will not be re-used, since the cutting process will cause large distortions. A new SST and a new acrylic vessel will therefore be made for the final detector, while all other components will be re-used. The onsite assembly should be as simple as possible to avoid logistic difficulties. The SST will be welded onsite from 6 pieces, with a mould to control the deformation during welding and ensure the dimensional precision. After the lid is sealed and the cabling and piping outside the SST are finished, the insulation layer wrapping the SST will be made onsite. A thin (about 3 mm) steel shell surrounds the SST, keeping a 20-cm gap in between; the polyurethane is then filled into the gap and foams into the shell shape. The insulation layers for the bottom and the lid of the SST can be made separately, and the bottom part should be ready before the SST is moved to its target position. More details can be found in Section 9.

Low temperature control

The low temperature system will lower the temperature inside the SST to -50 °C and keep it stable during data taking. The heat sources include the heat leakage from outside the SST (500 W), the heat produced by the SiPMs and the FEE readout (500-1000 W), and the heat from the backend electronics (< 1000 W). The estimate of the heat generated inside the SST will be revised once the readout scheme is decided and tested. The light yield of the GdLS is a function of temperature, increasing by about 0.35% per degree as the temperature decreases [52]. To have a stable detector energy scale, the temperature fluctuation should be controlled within ±0.5 °C for the GdLS in the acrylic vessel. The low temperature environment is realized with a cryostat and a cryogenerator. The cryostat includes the SST with insulation and the coiled pipe for the coolant. The design goals are:
• The temperature inside the SST is uniformly -50 °C, while the cryogenerator keeps the ability to cool the SST down to -70 °C. The heat load inside the SST is < 2.5 kW. A stirrer is not allowed, and the disturbance of the liquid due to convection should be small.
• The key requirement on the temperature uniformity is to keep ±0.5 °C inside the acrylic vessel (for the GdLS).
• The material, fabrication, and assembly should satisfy the cleanliness requirements.
• The cooling process from room temperature to -50 °C takes about 2 weeks, which could be fine-tuned by balancing the experimental requirements and the cost.
A single-layer SST with insulation, instead of a double-layer vacuum vessel, will be adopted, since the cryostat has to be welded together from several pieces in the laboratory due to the transportation limitations.
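The ±0.5 °C requirement above follows directly from the temperature coefficient of the light yield; a minimal sketch of the implied energy-scale stability:

# Energy-scale stability implied by the temperature dependence of the
# GdLS light yield (~0.35% per degree) and the +-0.5 degC requirement.
coeff_per_degc = 0.35e-2   # relative light yield change per degree
dt_max = 0.5               # degC, allowed temperature fluctuation

scale_shift = coeff_per_degc * dt_max
print(f"energy scale shift = {100*scale_shift:.2f}%")
# ~0.18%, small compared to the <1% energy scale calibration goal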
20-cm thick Polyurethane (PU) is chosen as the insulation material wrapping the SST. A layer of steel shell is required to form a mould outside of the SST, keeping a 20-cm gap between them. The PU foaming material will be injected into the gap to form the insulation layer.

Heat production and cooling design

There are three major sources of heat in the central detector:
• the heat leakage from the environment. After insulation with 20-cm polyurethane for the tank and a polyethylene hat for the calibration device (ACU), the heat leakage is estimated to be 460 W;
• the heat generated by the SiPMs and the readout electronics on the copper shell, which is estimated to be 500 W (ASIC option) or 1000 W (discrete option);
• the heat of about 1000 W from the backend electronics, mainly from the FPGAs, if they are installed inside the SST.
The thermal conductivity of polyurethane is 0.03 W/(m·K). A simple calculation of the bulk heat leakage from the environment at 25 °C to the inside of the SST at -50 °C through the 20-cm PU insulation gives 225 W. A detailed thermal simulation of an SST model including the support legs and the cable and pipe penetrations yields a heat leakage of 460 W. Inside the detector, the heat sources include the SiPMs, the frontend electronics, and the backend electronics (see Sections 6.5.2 and 6.5.3). The heat generation is expected to be stable and uniform on the surface of the supporting structure, the copper shell. Since temperature variations have a noticeable impact on the light yield of the GdLS, the core requirement of the cryogenic system is to keep the temperature of the GdLS stable and uniform to ±0.5 °C. The convection of the viscous liquid at low temperature may affect the light transmission. The heat sources and the cooling sources should be as close as possible, to take the heat away immediately. Therefore, corresponding to the three heat sources, the coolant coiled pipes are arranged in three groups. One group is attached to the SST wall, lid, and bottom to take away the heat leaking from the environment. Another group is attached to the surface of the copper shell to take away the heat generated by the FEE. The third group passes through the insulated boxes that contain the backend electronics. The copper shell is divided into several pieces and shipped into the underground laboratory; the cooling pipes need to follow this design. To avoid interference with the frontend electronics mounted on the copper shell, the maximum diameter of the cooling pipes should be less than 20 mm. The layout of the cooling pipes on the copper shell is shown in Figure 3. The whole central detector will operate at -50 °C. We have evaluated other cooling options but considered them less feasible. One of these options is to keep the copper shell and the SiPMs at -50 °C but the GdLS at 20 °C. We would then not need to worry about the water content and the fluor solubility in the GdLS. The thermal conductivity of acrylic is 0.19 W/(m·K). Supposing a 10-cm thick acrylic vessel is used to separate the GdLS and the SiPMs with good thermal insulation, the heat flow passing through the acrylic is 1330 W. This heat loss would have to be compensated either by circulating the GdLS with heat feeding, or by adding an interlayer with a liquid flow to feed the heat. Detailed analyses were done with the ANSYS Steady State Thermal and Fluent software. The thermal conductivity of LAB at 23 °C is measured to be 0.1426 W/(m·K) [56]. Another measurement has been done at low temperatures, which shows 0.139 W/(m·K) at -50 °C, as listed in Table 3-4.
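Both bulk conduction figures above follow from the one-dimensional conduction law Q = kAΔT/d; the sketch below reproduces them approximately, with surface areas estimated from the dimensions given earlier (the areas are simple geometric estimates, not numbers from this report):

import math

def conduction(k, area, dT, thickness):
    """Bulk 1-D heat conduction Q = k*A*dT/d in watts."""
    return k * area * dT / thickness

# PU insulation around the SST (2.10 m diameter, 2.20 m high cylinder)
a_sst = math.pi * 2.10 * 2.20 + 2 * math.pi * 1.05**2
q_pu = conduction(k=0.03, area=a_sst, dT=75.0, thickness=0.20)

# Alternative option: 10-cm acrylic shell around the 1.80 m GdLS sphere
a_acrylic = 4 * math.pi * 0.90**2
q_acrylic = conduction(k=0.19, area=a_acrylic, dT=70.0, thickness=0.10)

print(f"PU leakage ~ {q_pu:.0f} W, acrylic heat flow ~ {q_acrylic:.0f} W")
# ~240 W and ~1350 W, close to the 225 W and 1330 W quoted in the text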
The GdLS properties in the simulation have been taken to be the same as those of LAB. It is found that the temperature of the GdLS is hard to keep uniform in the acrylic vessel. The temperature differences are 5.5 degrees and 3 degrees for the above two options, respectively. A hybrid option keeping the GdLS at -30 °C could possibly satisfy the uniformity requirement of ±0.5 °C, but is still too complex to be feasible and reliable. Therefore, the option of operating the whole detector at -50 °C is chosen, while the R&D efforts are put into developing a stable low temperature GdLS.

Copper shell vs stainless steel shell

A 12-mm thick spherical copper shell provides the mechanical support for the SiPM tiles and their electronics readout. It also serves as a temperature stabilizer, providing a stable and uniform temperature for the GdLS inside the acrylic vessel and for the SiPMs on the inner surface of the shell structure. We require the uniformity of the temperature inside the acrylic vessel and the working temperature of the SiPMs both to be within ±0.5 degrees to obtain a uniform energy scale of the detector. A stainless steel shell has also been considered, as its strength is better than copper and its manufacturing is easier. However, its heat conductivity is about 15 W/(m·K), more than 20 times smaller than that of copper. The temperature field is analyzed for the steady state. A heat load of 50 W/m² is uniformly applied to the copper or stainless steel shell. The coiled coolant pipe has 14 layers, attached to the outer surface of the shell. The coolant temperature is set to -52 °C. As shown in the left of Figure 3-11, for a stainless steel shell of 15 mm thickness, the temperature of the bulk GdLS is -50.8 °C, while the temperature range is −50.8 ± 0.5 °C at a distance of 50 mm from the shell. On the edge of the acrylic vessel, some of the GdLS has a larger temperature variation than required. Although events in this area will be rejected by the fiducial volume cut, the temperature variation might have certain impacts on the physics. A temperature difference is also undesirable because it may cause GdLS convection, whose impact is unknown, especially for the viscous liquid at low temperature. In the right of Figure 3-11, the temperature field for the 12-mm thick copper shell with the same settings is shown. The temperature is very uniform and all the GdLS has a temperature of -52 °C. A special transient-state analysis has also been done to simulate a non-uniform heat load, e.g. some electronics generating more heat or even short-circuiting. As an extreme case, a heat load of 500 W/m² is applied to an area of 0.5 m² on the shell. A large temperature variation is observed for the stainless steel shell design, and it takes 10 hours to reach a steady state. For the copper shell design, the temperature difference is only 0.4 degrees, and it takes 10 minutes to reach a steady state.

Refrigerator system

The scheme of the refrigerator system is shown in Figure 3-12, including the refrigerator, heat exchanger, coiled coolant pipe, heater, meters, Programmable Logic Controller (PLC), and the cryostat (the detector). The coolant is cooled by the heat exchanger and driven by a magnetic drive pump. To control the temperature of the coolant, an electric heater is installed before the heat exchanger. The temperature control is realized by changing the power of the heater. Silicone oil is cooled in the heat exchanger and takes away the heat in the detector.
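For orientation, the coolant flow needed to remove the design heat load can be estimated from Q = ṁ·c_p·ΔT; the specific heat and the allowed coolant temperature rise below are assumed illustrative values for a silicone oil loop, not specifications from this report:

# Required coolant mass flow to remove the <2.5 kW design heat load.
q_load = 2.5e3      # W, design heat load inside the SST
c_p = 1.5e3         # J/(kg*K), assumed specific heat of silicone oil
dT_coolant = 2.0    # K, assumed allowed coolant temperature rise

m_dot = q_load / (c_p * dT_coolant)
print(f"required mass flow ~ {m_dot:.2f} kg/s")
# ~0.8 kg/s under these assumptions; the real loop is sized with 1.5x margin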
The cycling pipes connecting the SST and the refrigerator are vacuum-insulated cooling pipes made of stainless steel, so the diameter of the pipes can be as small as 50 mm. Otherwise, normal pipes must be wrapped with an insulation layer up to about 10 cm thick. All coolant pipes outside the detector need to be well insulated to avoid water condensation. The heat exchanger is made of AISI-316 stainless steel. The electric heater is controlled with a Proportional Integral Derivative (PID) controller. The secondary refrigerant is silicone oil, which has a broad working temperature range from -80 °C to 195 °C. The silicone oil has good temperature stability and good thermal conductivity, and is chemically inert, non-toxic, and environmentally friendly. The cooling capacity is taken with a 1.5 times margin, about 3.5 kW at -50 °C. The cooling capacity of the refrigerator chosen for the prototype experiment is shown in Table 3-3.

Temperature monitoring

The temperature in the detector must be measured and monitored during the whole running period of the experiment. The temperature data in the SST are crucial for the control of the refrigerator system, and the temperature data outside the SST are also important to estimate and monitor the heat leakage in different areas. The temperature sensors will be placed uniformly on the outer surfaces of the copper shell, the SST, and the PU insulation to monitor the full detector system. The number and locations of the temperature sensors are still under optimization, and will be verified with the prototype detector. The performance of the cryogenic system has been tested in a prototype for JUNO [57]. After being fitted with a 20-cm insulation layer on the outer tank, equipped with cooling pipes and a refrigerator system, refilled with TAO GdLS and buffer LAB in place of the original LS and water, and instrumented with 10 temperature sensors, the prototype was adapted into a low temperature liquid scintillator detector with 70 L of GdLS and about 6 tons of buffer LAB. The prototype has been successfully cooled down from 20 °C to -50 °C. The heat load (basically from the heat leakage through the insulation) is found to be about 500 W, as expected. The temperature non-uniformity in the 1.8-m cylindrical space has reached 1.3 °C and the temperature stability is 0.02 °C over 20 days.

Liquid scintillator

Gadolinium-doped liquid scintillator (GdLS) will be used as the neutrino target of TAO to register a clean delayed IBD signal from the neutron capture on Gd and thus reduce the accidental background. To lower the dark noise of the SiPMs to 100 Hz/mm² as required by TAO, the GdLS and the buffer liquid should work at -50 °C or lower. The GdLS for TAO should have good
• transparency at -50 °C;
• light yield at -50 °C;
• chemical stability for several years;
• safety (fire risk, toxicity, environmental safety, etc.).
Linear Alkylbenzene is used in Daya Bay and JUNO as the solvent of the liquid scintillator. It has advantages for all the above TAO GdLS requirements. In particular, it has a high flash point, > 130 °C, and is thus very suitable for use near the reactor. Commercially available LAB is in general a mixture with linear-chain carbon numbers ranging from 9 to 14. It has a freezing point lower than -60 °C, but its water content and the fluor in LAB-based LS may precipitate at low temperature. By tuning the recipe of the Daya Bay GdLS and adding a co-solvent, we have developed a GdLS (and undoped LS) that works well at low temperature.

Recipe

LAB has a freezing point lower than -60 °C, although this depends on the particular sample, given that it is a mixture.
The viscosity, density, specific heat capacity, and thermal conductivity of a typical LAB sample used for JUNO have been measured at different temperatures, as shown in Table 3-4. Although still in the liquid state, normal LAB may turn cloudy at low temperature due to its water content. At room temperature, the water content in LAB is often ∼ 40 ppm, while the saturated content is ∼ 200 ppm. The transparency of the liquid sample at low temperature was measured with a customized apparatus shown in Figure 3-13. When precipitate appears, the light received by the Charge Coupled Device (CCD) is reduced due to scattering of the incident light. Furthermore, the edge of the light spot on the CCD turns fuzzy. We found that LAB stays clear if the water content is less than 5 ppm. Water in LAB can be removed by bubbling with dry nitrogen. Di-isopropylnaphthalene (DIN), another solvent candidate, also has a high flash point. The NEOS experiment added 10% DIN to its LAB-based liquid scintillator and obtained better pulse shape discrimination [23], which is attractive for an experiment at shallow overburden to reject the fast neutron background. However, DIN itself turns cloudy starting from -20 °C, and adding 10% DIN into LAB also degrades the transparency. Further R&D will be done on this option with lower fractions of DIN. Another solvent candidate, pseudocumene, has a flash point of about 40 °C (flammable) and a freezing point of -43.78 °C, and is not suitable for TAO. The Daya Bay liquid scintillator has 3 g/L PPO as fluor and 15 mg/L bis-MSB as wavelength shifter [55]. New R&D for JUNO using one Daya Bay detector [58] shows that 1 to 3 mg/L of bis-MSB gives the highest light output for JUNO. For most detectors of various sizes, 3 mg/L bis-MSB is enough, depending slightly on the purity of the solvent. A higher concentration of bis-MSB may slightly reduce the light yield if the solvent has very little absorption, as for JUNO, but may be advantageous by quickly converting the light to longer wavelengths if the solvent or other components in the LS absorb at the emission wavelength of PPO, which can also improve the photon collection efficiency. The bis-MSB emission spectrum and the typical SiPM Photon Detection Efficiency (PDE) as a function of wavelength overlap well, as shown in Figure 3-14. At low temperature, the solutes PPO and bis-MSB in the LS may precipitate. We found that the solubility of PPO in LAB at -50 °C is between 1 and 1.2 g/L. The absorption of the solution suddenly increases when the temperature decreases from -40 °C to -50 °C, as shown in Figure 3. However, a liquid scintillator with 1 g/L PPO and 0.2 mg/L bis-MSB has a significantly lower light yield compared to the optimal recipe for JUNO with 2.5 g/L PPO and 1-3 mg/L bis-MSB. Alcohol has a low freezing point and is lipophilic. Adding a small fraction of alcohol to the LS increases the solubility of PPO and bis-MSB, thus curing the light yield problem at low temperature, as shown in Figure 3-16. Besides ethanol, the less volatile pentanol was also studied but found to be incompatible with the gadolinium reagent. In conclusion, the recipe of the TAO GdLS is chosen to be LAB plus 2 g/L PPO and 1 mg/L bis-MSB, with a 0.1% mass fraction of gadolinium using Daya Bay's Gd-complex [55], and ∼ 0.05% ethanol as co-solvent. Since the water in the GdLS needs to be removed by bubbling with nitrogen and ethanol is volatile, we will add more ethanol, e.g.
0.5%, at the beginning, and will monitor the ethanol content during operation with a sensor in the overflow tank of the detector. The Gd-complex, a complex of GdCl3 and carboxylic acid, contains a fraction of water molecules. The water content in the GdLS can be higher than in the LS without Gd-doping while still staying clear at low temperature. We require that the water content be lower than 10 ppm in the TAO GdLS, and lower than 5 ppm in the LAB used as buffer liquid. The absorption is shown in Figure 3.

Liquid scintillator light yield

Adding a co-solvent may have an impact on the light yield. We measured the light yield of an LS referred to as "JUNO-TAO-LS" (LAB + 2 g/L PPO + 1 mg/L bis-MSB + 0.05% ethanol), which has the same recipe as the TAO GdLS but without Gd, relative to the JUNO LS (LAB + 2.5 g/L PPO + 3 mg/L bis-MSB), with a customized apparatus in which six PMTs are coupled to a cubic acrylic vessel of 2-cm dimension. The coincidence of the 6 PMTs reduces the noise significantly. The internal conversion electrons of a 207 Bi source are used to measure the light yield, as shown in Figure 3-17. The gain of the PMTs at low temperature has been calibrated. We find that the relative light yield of the TAO LS is 96% of that of JUNO at room temperature. The light yield of liquid scintillator usually increases at lower temperature due to less thermal quenching [52]. The light yield of the TAO GdLS at low temperature is still under study. The nonlinear energy response of the TAO GdLS will be studied in the laboratory with an apparatus similar to that in Ref. [59], but placed in a low temperature cryostat.

Production and filling

The production of the TAO GdLS will be similar to that of the Daya Bay GdLS [55]. However, we need to control the water content and the ethanol content in the GdLS, which is challenging since ethanol is volatile and exposure to air will increase the water content in the GdLS. The production and filling thus need special caution. Several methods to remove water have been studied. Nitrogen bubbling is the preferred method, without any risk of polluting the very transparent GdLS. A sample of 1 L of LS was bubbled with high purity dry nitrogen at 2 L/min. The water content was measured with a Karl Fischer apparatus, as shown in Figure 3-18. After two hours of bubbling, the water content is stable at 5 ppm, and remains stable after being sealed and covered with nitrogen. The ethanol content in the GdLS needs to be monitored during operation, since the flowing nitrogen cover may take the ethanol away. An ethanol sensor will be installed in the overflow tank for that purpose. Using the partition coefficient of ethanol between air and GdLS, the measurement in the nitrogen cover can be converted to the ethanol content in the GdLS. Details will be worked out later, such as the selection of the sensor and the measurement of the partition coefficient at low temperature. The GdLS is compatible with the acrylic vessel and the relevant materials used for TAO, given the experience from Daya Bay [60]. The compatibility of LAB as the buffer liquid with the detector materials needs to be studied later.

Requirements

The precise measurement of the reactor antineutrino spectrum in the TAO experiment requires extreme care in the characterization of the detector properties as well as frequent monitoring of the detector performance. The requirements for the calibration of the detector response are listed in Table 4-1.
These can be met by a comprehensive automated calibration program with LED light sources and radioactive sources deployed into the detector volume at regular intervals (i.e., daily or weekly). Radioactive isotopes and spallation neutrons produced by cosmic muons in TAO provide additional full-volume calibration data. One Automatic Calibration Unit (ACU) of the Daya Bay experiment will be reused in TAO after the Daya Bay decommissioning. Similar calibration procedures will be employed in TAO, and the energy scale calibration is expected to reach a similar precision as Daya Bay (< 1.0%) [54]. The spatial uniformity has an impact on the energy resolution in the energy reconstruction. To achieve a sub-percent energy resolution in most of the energy region, a 0.5% spatial uniformity is required.

Radioactive sources

The main goal of TAO is to precisely measure the antineutrino energy spectrum via the inverse beta-decay (IBD) reaction. The prompt signal, a proxy for the antineutrino energy, is produced by the positron in the IBD reaction. The positron ionization and the subsequent annihilation gammas produce a signal in the energy region of 1-10 MeV. The delayed signal is produced by the gammas emitted from neutron capture on Gd. No monoenergetic positron calibration source is available, so a set of gamma sources is used to characterize the detector response. It is necessary to characterize the detector properties carefully before data taking and to monitor the stability of the detectors during the whole data taking period. Calibration sources must be deployed regularly throughout the active volume of the detector to simulate and monitor the detector response to positrons, neutron capture gammas, and gammas from the environment. The sources available for calibration in TAO are listed in Table 4-2. These sources cover the energy range from about 0.5 MeV to 10 MeV and thus can be used for a thorough energy calibration. The positrons produced by the 22 Na and 68 Ge sources will be absorbed by the enclosure and only the two annihilation gammas will be released. Dissolving the positron source in a small scintillator volume contained in a transparent vessel is also under consideration. The detector response depends on the particle species. The conversion of the gamma response to the positron response can be determined from simulation. A similar conversion procedure, based on the experience in Ref. [54], will be performed for TAO and the related uncertainty will be investigated. Neutron sources simulate the delayed signal of IBD reactions at the calibrated positions. The Am-Be source can be used to calibrate the detection efficiency of the neutron capture signal by detecting the 4.43 MeV gamma in coincidence with the neutron. The 238 Pu-13 C source will similarly provide a 6.13 MeV gamma as a prompt signal in coincidence with the delayed neutron. The neutron detection efficiency can be determined with neutron sources by measuring the energy spectrum of the neutron capture signals. In addition, neutron sources allow us to determine the appropriate selection cut on the IBD coincidence time by measuring the neutron capture time. The positron annihilation signal in IBD reactions can be simulated with positron sources. Radioactive sources must be encapsulated in small containers to prevent any possible contamination of the ultra-pure liquid scintillator. They can be regularly deployed over the whole active volume along the Z axis of the detector.
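For reference, the prompt (positron) visible energy relates to the antineutrino energy through the IBD kinematics: the positron carries roughly E_nu − 1.8 MeV of kinetic energy and adds 1.022 MeV from annihilation. A minimal sketch of this standard approximation, neglecting the small neutron recoil:

def prompt_energy(e_nu_mev):
    """Approximate IBD prompt energy: E_prompt ~ E_nu - 0.78 MeV.

    The positron kinetic energy is ~E_nu - 1.80 MeV (reaction threshold),
    plus 1.022 MeV from the two annihilation gammas; neutron recoil is
    neglected here.
    """
    return e_nu_mev - 1.80 + 1.022

for e_nu in (2.0, 4.0, 8.0):
    print(f"E_nu = {e_nu} MeV -> E_prompt ~ {prompt_energy(e_nu):.2f} MeV")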
Calibration data are used to benchmark the Monte Carlo (MC), and the MC is then used to predict the IBD responses. As an example, the simulated detector response (deposited energy) for the 8 MeV n-Gd capture signals throughout the detector volume is shown in Figure 4-1. Regular monitoring of the full-volume response for these events, compared with the regular automated source deployments, will provide precise information on the stability of the detector, particularly of the optical properties. With the help of MC simulations, this comparison can be used to assess the detection efficiency and its stability. Cosmic muons passing through the detector produce short-lived radioactive isotopes and spallation neutrons. These events will follow the muon signal detected in the muon system as well as in the CD. The alpha rate, which depends on the actual GdLS radioactivity, is estimated assuming that the radioactivity in TAO is similar to Daya Bay. The rates of these events for TAO are given in Table 4-3. Figure 4-2 shows the measured scintillator nonlinearity in the energy scale calibration from the Daya Bay experiment [54].

LED source

LEDs have proven to be reliable and stable light sources that can generate fast pulses down to nanosecond widths at wavelengths (420 nm) similar to the scintillation light in liquid scintillator. In particular, photons are emitted from the LED promptly, in contrast to the exponential decay of the scintillation light. Therefore, an LED light source is usually an essential component of the detector calibration system. The most important function of the LED light source is the timing alignment of all SiPM readout channels. High intensity light pulses from an LED diffuser ball at the detector center can be used to obtain the alignment constants of all channels. The timing is usually a function of the charge, depending on both the design of the electronics and the time resolution of the SiPMs. In addition to the time alignment constants, the relationship between timing and charge, the so-called time walk curve, can be obtained by varying the light intensity of the LED. The gain of the SiPMs can be monitored by regularly deploying the LED diffuser ball at the detector center and generating low intensity light pulses corresponding to single photoelectron (p.e.) signals in each readout channel. Since SiPMs have very good charge resolution, multiple p.e. signals from calibration or physics events can also be used to calibrate the gain of each channel, as shown in Figure 6-15 and Figure 6-22. An example of the LED source from Daya Bay [61] is shown in Figure 4-3. The LED is potted in a nylon diffuser ball which is then encapsulated in an acrylic enclosure, to avoid contamination of the liquid scintillator. A blue laser diode is under consideration as an alternative to the LED to achieve a better time calibration. The deployment cable would have to be changed from electrical to optical fiber compared to the Daya Bay practice, and the reliability at low temperature needs further R&D.

Automated calibration unit

The detector will be instrumented with an Automated Calibration Unit (ACU) recycled from the Daya Bay experiment [61]. The automated deployment system will be used to monitor the detector on a routine basis, perhaps daily or weekly, and will allow full z-axis access inside the detector. The ACU system will be located above a single port on the top of the detector vessel, as shown in Figure 4-4.
Three source assemblies (each potentially containing several isotopes) can be lowered into the liquid scintillator along the central axis of the detector volume, one assembly at a time. This is facilitated by 3 independent stepping-motor driven source deployment units, all mounted on a common turntable. The turntable and deployment units are all enclosed in a sealed stainless steel bell jar. The schematic drawing of the ACU is shown in Figure 4-5 and a photo of the inside turntable is shown in Figure 4-6. All internal components have been certified to be compatible with the liquid scintillator. The deployment systems will be operated under computer-automated control in coordination with the data acquisition system (to facilitate the separation of source monitoring data from physics data). Each source can be withdrawn into a shielded enclosure on the turntable for storage. The source assemblies can be deployed at a speed of ∼ 7 mm/s and the deployed source position will be known to better than 2 mm. We plan to include an LED with diffuser ball on deployment axis 1 of the ACU, a 68 Ge (positron) source on axis 2, and a combined source of 60 Co (γ) and a neutron source ( 252 Cf, Am-Be, or 238 Pu-13 C) on axis 3 for regular calibration, similar to Daya Bay. These sources can be deployed in sequence into the detector. During automated calibration or monitoring periods, only one source (or one combined source) will be deployed in the detector at a time. Simulation studies indicate that we can use these regular automated source deployments to track and compensate for changes in 1) the average gain of the detector (photoelectron yield per MeV); 2) the number of operative SiPMs; 3) the scintillation light attenuation length, as well as other optical properties of the detector system. Simulation studies are in progress to determine whether more sources could be combined into the three source assemblies. Making use of the excellent energy resolution of TAO, different sources in one assembly might be able to calibrate the detector simultaneously, with negligible impact on the lower energy peak from the tails of the higher energy peaks. The minimal number of locations necessary to sufficiently characterize the detector spatial uniformity is also under investigation. The automated deployment of sources will be scheduled in sequence, and is expected to take 1-2 hours for each calibration run. Physics data acquisition will be suspended during this period, and these data runs will be designated as calibration runs. For some special calibration runs, we can configure the ACU to change the regular sources. The ACU control system will need to communicate and coordinate with the data acquisition system during these calibration runs so that all the data are properly recorded and labeled. The ACU was originally designed to operate at room temperature. In the TAO experiment, the ACU has to operate at -50 °C. It is known that the O-ring material is not suitable for low temperature operation and will be replaced. We will test the ACU system in a cryogenic box with a volume of 1 m³, and then in a prototype detector.

Calibration strategy

The selection of the radioactive sources, the calibration points for spatial uniformity, the activities of the radioactive sources, and the calibration run time are being studied with simulations.

SiPM parameters

Although the SiPM parameters will be measured in bench tests as described in Section 6, the in-detector calibration is important for redundancy and monitoring.
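The single-p.e. gain calibration described next relies on keeping the LED intensity low enough that multi-p.e. contamination is negligible; the sketch below works out the Poisson arithmetic for the 0.1 p.e. average intensity quoted in the text:

import math

mu = 0.1  # average p.e. per channel per LED flash

p_ge1 = 1.0 - math.exp(-mu)               # P(n >= 1), channel fired
p_ge2 = 1.0 - math.exp(-mu) * (1.0 + mu)  # P(n >= 2), multi-p.e. event

contamination = p_ge2 / p_ge1             # multi-p.e. fraction among fired events
print(f"multi-p.e. contamination = {100*contamination:.1f}%")
# ~4.9%: most fired events are clean single p.e. signals at this intensity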
The SiPM gain of each channel can be calibrated with the single p.e. spectrum using the LED or the SiPM dark noise. To avoid contamination of the single p.e. spectrum by multiple p.e. signals, the LED can be configured at a low intensity, such as an average of 0.1 p.e. per channel. The gain stability will be monitored at regular intervals during calibration runs. An alternative method is to use the dark noise, which consists mostly of single p.e. signals. Dark noise data can be accumulated during normal physics running without interruption by the regular calibration. The SiPM timing calibration requires fast LED light pulses with widths at the 1 ns level or better, as already achieved in Daya Bay. The time alignment constants are mainly determined by the readout electronics and the cable lengths, and remain stable over long periods. A high intensity light pulse is necessary to ensure that all readout channels fire simultaneously after correcting for the time of flight. This calibration can be done at a very low frequency. The relative PDE might be monitored from the p.e. yield of each SiPM, if the uniformity of the diffuser ball is good enough. Isotropic light from the diffuser ball on the central axis would then provide "identical" photon inputs for the SiPMs. A calibration source at the detector center is an alternative choice if the isotropy of the LED diffuser ball cannot meet the requirement. The optical cross talk of each SiPM can be determined by measuring the p.e. spectrum. The number of p.e. of each SiPM should follow the Poisson distribution from pure statistics; the deviation from the Poisson distribution provides the calibration of the cross talk.

Spatial uniformity

The spatial uniformity is determined by the detector geometry and its optical properties and is studied with a preliminary Geant4 simulation. The scintillation light is collected by the array of SiPM tiles as shown in Figure 8-3. The detector is filled with GdLS in a spherical acrylic vessel with a diameter of 1.80 m and a thickness of 2 cm. The acrylic vessel is surrounded by the fully covering SiPM array and dipped in the LAB buffer, where the SiPM surface is 15 mm from the acrylic. At both the top and the bottom of the GdLS sphere, SiPMs are removed because of a 15 cm diameter calibration chimney located at the top of the detector and an acrylic supporting leg of 10 cm diameter at the bottom. Due to the geometry mismatch between the spherical vessel and the rectangular SiPM tiles, the photo-coverage around the poles is much lower than that at the equator. The chimney and the supporting leg, as well as the photo-coverage non-uniformity, affect the optical asymmetry of the detector. We simulate 1 MeV electrons along the X/Y/Z axes from the center to the edge of the GdLS volume, as shown in Figure 4-7. The events along the X and Y axes have more photoelectrons compared to the events along the Z axis, which is caused by the calibration chimney, the supporting leg of the acrylic vessel, and the sparser SiPM coverage near the poles. The non-uniformity shown in Figure 4-7 corresponds to the SiPM layout of option A in Figure 3-6. Further evaluation of the layout options, possibly with non-square SiPM tiles around the poles, is under way to improve the uniformity. Given the large non-uniformity of the results, it seems very demanding to deduce the spatial dependence of the energy response based solely on the source calibration points located along the detector z-axis.
The proposed solution goes in two directions: 1) to reduce the asymmetry between the X/Y and Z axes by adjusting the SiPM arrangement, especially around the calibration chimney and the supporting leg; 2) to combine the calibration results along the Z axis with uniformly distributed events (e.g. spallation neutrons, alpha radioactivity from 212 Bi or 214 Bi in the GdLS) to model the full volume response. Such a mapping could be very effective, since the vertex can be reconstructed with a resolution of better than 5 cm. For positions at the detector edge, the calibration suffers from large energy leakage. A proper calibration source will be selected with more simulations to determine the spatial uniformity, instead of using the virtual source of 1 MeV electrons. The energy leakage is important for the signal of neutron capture on Gd. As shown in Figure 2-2, the detection efficiency of the IBD delayed signal is only 59% due to the large tail below 8 MeV. The energy leakage has been studied for the IBD positron and brings a systematic uncertainty described in Section 2. Calibration at the detector edge is essential to understand the energy leakage effect and the spatial uniformity.

Energy scale

The energy scale is nonlinear due to the quenching effect of the liquid scintillator. The linearity of the energy response for different energies and particle types is essential to reach a precise neutrino spectrum measurement. Figure 4-8 shows the energy nonlinearity models, including the gamma calibration curve and the electron spectrum from 12 B [54]. TAO will use a similar calibration procedure as Daya Bay and is expected to achieve a similar precision.

Charge pattern

The charge pattern calibration is crucial for the event vertex reconstruction, to produce the expected charge distribution for all channels, which is a coupled effect of the PDE and the position-dependent acceptance of the SiPMs. During the spatial uniformity calibration, the charge pattern can be obtained simultaneously by relative measurement of the p.e. output in each channel. The charge pattern can also be obtained from the Geant4 simulation after matching the simulated data to calibration data taken at different positions.

Quality assurance

The assembled automated system has been fully tested at Daya Bay. The positioning accuracy of 2 mm at room temperature, the reliability, and the fail-safety of the interlocks have been well established. The radioactive sources have been tested to certify that they are leak-tight. The activity of each source will be measured and documented. The ACU system and the selected sources will be tested in a cryogenic box and then in a prototype detector.

ES&H

The calibration system does not involve flammable materials or gases, high voltage, or other hazards. The radioactive sources are of very low activity, typically 1000 Bq or less, and will be operated in a shielded environment so that they do not represent a hazard to humans. Personnel involved in the installation and testing of the sources will need to be properly trained and monitored, but the dose rates will be extremely low, on the order of µrem/hr.

Risk assessment

The primary risks associated with the calibration systems are the interface with the detector and the reliability when working at -50 °C. Interlocks must ensure that the pressure in the calibration system is equalized with the detector before deploying a source. The sources and materials must be tested to be compatible with the liquid scintillator to avoid contamination of the detector.
A rigorous safety assessment in the Daya Bay experiment has demonstrated the reliability of the ACU and source design [61].

R&D

The LED timing precision still needs to be investigated further to satisfy the ∼1 ns requirement. A laser with an optical fiber is considered as an alternative light source to the LED, with a better timing resolution of < 0.5 ns. Considering the limited number of sources that can be included in the ACU for regular calibration, further study is needed to finalize which sources to use, at what activities, and how to combine them. At the same time, considering the unprecedented energy precision requirement, a better understanding is needed of the effect of the source packaging on the energy calibration as well as of the source-related backgrounds. The spatial non-uniformity is large with the current detector design, and solutions should be investigated carefully. The strategy of regular and special calibrations should be designed to balance the physics data taking time and the calibration performance.

Requirements and baseline design

The main backgrounds for the TAO experiment are induced by the spallation products of cosmic-ray muons and by accidental coincidences, mostly due to natural radioactivity. In order to reduce the portion of these backgrounds originating outside the central detector (CD), a shielding system surrounding the detector will be used. The system should provide enough shielding while coping with the limitations of the space in the experimental hall. Moreover, it has to allow easy access to the CD area for the installation and later for possible maintenance. The cosmogenic background produced in the CD or in its closest vicinity can be further reduced by detecting passing muons and applying a short veto time in the CD. A veto system for muon detection will therefore be used in TAO. Only a short veto of ∼ 20 µs after each muon is envisaged, due to the relatively high muon rate of 70 Hz/m² in the underground hall; this still rejects most of the muon-associated background, including fast neutrons, without introducing excessive dead time, as estimated in the sketch below. The veto will be applied offline, providing an opportunity for careful studies and optimization of its performance. The veto system should provide 90% muon tagging efficiency with an uncertainty < 0.5% and some muon track information, to sufficiently reduce the muon induced background to the level determined by the physics studies. The background from untagged muons (due to the tagging inefficiency) can be estimated and subtracted statistically with a small uncertainty, by measuring the spectrum of the tagged background events together with a precise knowledge of the tagging efficiency. While reaching this performance, the veto system design still has to fit within the limited dimensions of the experimental site. A side view of the veto detector is shown schematically in Figure 1 in the Executive Summary section. The CD is shielded by 1.2 m of water in the surrounding water tanks, 1 m of High Density Polyethylene on the top, and 10 cm of lead at the bottom. The water tanks, instrumented with Photomultipliers (shown by red circles), and the Plastic Scintillator (PS) on the top comprise the active muon veto system. At least one side of the tanks is movable to provide access for assembly and maintenance. The exact shape of the water tanks is under optimization, keeping in mind that the maximum dimension should be < 5.1 m to fit in the laboratory. Figure 5-1 shows the octagon option, which takes less space than the rectangular option.
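As referenced above, the dead time introduced by the 20 µs veto can be estimated from the muon rate in the veto detectors (about 4000 Hz, quoted in the next section); a minimal sketch:

import math

r_mu = 4000.0     # Hz, estimated muon rate in the veto detectors
t_veto = 20e-6    # s, veto window after each tagged muon

# Fraction of time vetoed, allowing for overlapping windows (Poisson muons)
dead_fraction = 1.0 - math.exp(-r_mu * t_veto)
print(f"veto dead time ~ {100*dead_fraction:.1f}%")
# ~7.7%: a known, exactly correctable loss rather than a spectral distortion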
The top and bottom shielding is made of solid material instead of water due to space and installation considerations. A minimum thickness of 1.2 m of water, or an equivalent thickness of other materials, is required to sufficiently suppress the gammas and neutrons from natural radioactivity. To serve as an active tagging system, the water tanks will be instrumented with 3-inch Photomultiplier Tubes (PMTs) to detect Cherenkov photons from muons going through the water. A ≥ 90% efficiency for muons passing through the water tank volume can be achieved with a 0.8% photocoverage. In addition, a few layers of instrumented plastic scintillator will be placed above the CD and the water tanks to detect muons from the top and further improve the muon detection and tracking, providing an independent ≥ 90% muon detection efficiency. The exact design of the layers is still under investigation. The combined efficiency of the veto systems has to exceed 90%, with an uncertainty < 0.5%, to satisfy the experimental requirements. The muon rate in the veto detectors is estimated to be about 4000 Hz. Tab. 5-1 shows the expected event rates with and without the shielding and the muon veto time applied. The background rates will be reduced to ∼ 10% of the IBD rate with the shielding and veto systems, which is sufficient to satisfy the TAO physics requirements.

Water tank: shielding and Cherenkov detector

The CD will be surrounded on the sides by water tanks with a thickness of 1.2 m, acting as passive shielding and as an active Cherenkov muon detector. The water tanks serve several crucial purposes. (i) The fast-neutron background originating from cosmic-ray muons will be significantly reduced by such shielding: a Monte Carlo simulation shows that the rate of the fast-neutron background produced outside the detector is reduced by a factor of 1.5-2.0 by 50 cm of water. (ii) Pure water effectively reduces the accidental background rate associated with γ rays, since the low-energy γ ray flux is reduced by a factor of ten by 50 cm of water. (iii) The water tanks will be instrumented with PMTs for the detection of the Cherenkov light induced by passing muons. Water tanks have been used as shielding for a JUNO prototype detector of similar size to the TAO detector, as shown in Figure 5-2, and have been successfully operated for several years. They are commercially available and can easily be adapted to different geometries at low cost. To save space in the Taishan Neutrino Laboratory, rectangular, circular, and octagonal water tanks have been considered, while keeping the requirement of 1.2-m water-equivalent shielding around the CD in all directions. The overall dimensions of the TAO water tanks are limited by the 5.1 m × 5.1 m × 3.8 m (height) space in the experimental hall. The octagonal geometry is chosen as the baseline design for TAO, as shown in Figure 5-1(b). With a thickness of 1.2 m, the mass of water is estimated to be ∼ 80 ton. The water tank will be instrumented with an array of 3-inch PMTs. Its inner surface will be covered by a reflective Tyvek film. A similar technology has been used for the water pool of the Daya Bay experiment [62], as shown in Figure 5-3, and will be used for the JUNO experiment. Inward-facing PMTs will be mounted on stainless steel frames placed on the sides and at the bottom of the tank. The frame will support the PMTs before the water tank is filled and hold the PMTs in place afterwards, compensating the buoyant force in the water.
The PMTs will be approximately evenly distributed in the tank, forming a rectangular grid with a density of around 1 PMT per 0.5 m². This corresponds to ∼230 PMTs and a 0.8% surface coverage. A total of 256 PMTs might be used for redundancy. The 3-inch PMTs and their bases, potting, and readout electronics will use the same technology as the JUNO small-PMT system [1].

(Figure: (a) PMT assembly; (b) Tyvek film used as a reflector.)

The ultra-pure water in the water tanks can be a corrosive agent. The tank material, as well as the materials of the PMT support structure and of the PMTs themselves, will be selected to prevent long-term corrosion, which could cause a failure of the structural integrity or the dissolution of impurities into the water. Furthermore, the clarity of the water will be maintained by a water purification system, to keep the attenuation length for the Cherenkov light significantly larger than the dimensions of the tank. For micron-sized particles, this translates to a particle number density of < 10^10/m³ according to the JUNO R&D. A simple purification system with a filter stage followed by reverse osmosis will be sufficient to meet our specifications; suspended particles can be filtered down to a diameter of 1 nm, and eventual radioactivity in the water can be efficiently reduced to a satisfactory level. Bacterial growth in the water is also a concern for the water clarity, so an ultraviolet sterilization stage or a micro-bubble stage will be integrated into the purification system. We anticipate a circulation rate of 50 L/min in the water system, which allows one complete turnover of the water in about one day. Ongoing R&D using the JUNO prototype water tanks will help to validate the design of the purification system. The performance of the water-tank Cherenkov detector, such as its efficiency, position resolution, and timing resolution, is being further optimized with Monte Carlo simulation. The PMTs need a ∼2 ns time resolution in order to properly determine the veto window. Their gain stability and timing will be monitored with LED diffuser balls mounted at several locations within the water tank. Muons will be tagged when a minimum number of PMTs exceeds a certain threshold. For the baseline design of the water tank, a detection efficiency of ≥ 90% can be achieved with a threshold of 10 hit PMTs, corresponding to the light produced by a muon with a 20 cm track length in the water tank. The rate of false muon tags due to random coincidences of PMT dark noise has also been calculated: Figure 5-5 shows the distribution of dark-noise random coincidences in a 200 ns window, assuming a conservative dark noise rate of 2 kHz per PMT (a simple numerical estimate is sketched below). A threshold of > 6 hit PMTs corresponds to a < 1% dead time due to false muons from random coincidences. In summary, a coincidence threshold of 6-10 hit PMTs will identify muons with ≥ 90% efficiency while keeping the dead time from random dark-noise coincidences below 1%.

Top and bottom shielding and top muon detector
The available height of the Taishan Neutrino Laboratory for the detectors is about 3.85 m due to the steel beam frames on the roof, although the full height of the laboratory is about 5 m. Since the CD has a height of 2.6 m including the thermal insulation layer, it is impossible to shield it with 1.2 m of water both on the top and at the bottom. Simulation shows that the fast-neutron rates from the side and the bottom of the CD are 50% and 2%, respectively, of that from the top, as listed in Tab. 5-2.
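Before turning to the shielding materials, here is the dark-noise coincidence estimate referenced in the water-tank section above, as a minimal sketch: with ∼230 PMTs at an assumed 2 kHz dark rate each, the number of uncorrelated hits in a 200 ns window is Poisson distributed, and the probability of reaching the tagging threshold is tiny.

    import math

    n_pmts, dark_rate, window = 230, 2000.0, 200e-9
    mean_hits = n_pmts * dark_rate * window          # ~0.092 expected hits per window

    def poisson_tail(mean, k_min):
        """P(N >= k_min) for a Poisson distribution."""
        return 1.0 - sum(math.exp(-mean) * mean**k / math.factorial(k)
                         for k in range(k_min))

    for threshold in (6, 10):
        p = poisson_tail(mean_hits, threshold)
        # crude rate estimate: one independent window every 200 ns
        print(f">= {threshold} hits: P = {p:.1e}, false-tag rate ~ {p / window:.1e} Hz")

This toy ignores correlated effects such as afterpulsing, so it is only an order-of-magnitude cross-check of the full distribution shown in Figure 5-5; it is nevertheless far below anything that could threaten the < 1% dead-time claim.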
Given the relative rates in Tab. 5-2, more low-density, hydrogen-rich material should be placed on the top to moderate the fast neutrons, while the bottom shield can be replaced with a high-density, high-Z material that only needs to stop the ambient radioactivity. A 10-cm layer of lead bricks will be placed at the bottom of the CD, while about 1 m of High Density Polyethylene (HDPE) will be placed on the top. Since very few fast neutrons come from the bottom, the shielding against γs and neutrons is adequate in all directions. It might be possible to put more HDPE above the steel beam frame to further increase the top shielding; this will be investigated later. An HDPE "hat" will be made for the Automated Calibration Unit (ACU) so that the top shielding can be easily implemented on a support frame, as shown in Figure 5-6. The presence of the ACU will increase the fast-neutron background by < 10% compared to the full HDPE shielding case. Simulation also shows that boron doping of the HDPE does not appreciably reduce the fast-neutron background, although the single-neutron events in the CD, originating from low-energy neutrons, are somewhat reduced. A few layers of plastic scintillator above the CD will serve as a muon detector; a multilayer coincidence reduces the false muon rate due to coincidences of natural radioactivity events. All plastic scintillator strips will be made from extruded polystyrene with dimensions of 2 m × 0.1 m × 0.01 m or 2 m × 0.1 m × 0.02 m, co-extruded with a coating of TiO2-doped PVC. 1-inch PMTs, such as the Hamamatsu R6095 or Electron Tubes 9128B, running at positive high voltage, will read out each end of the scintillator strips. The readout electronics of the JUNO 3-inch PMT system should be adequate for this purpose. The design and assembly of the plastic scintillator muon detector will be investigated in detail later.

Summary and R&D
The proposed design of the shielding and veto systems can reduce the accidental and fast-neutron backgrounds to less than 200 events per day each, which is less than 10% of the antineutrino signal and has a negligible impact on the precision measurement of the reactor antineutrino spectrum after background subtraction. The exact water tank geometry, the top and bottom shielding design, and their fabrication and installation at the experimental site are still under consideration. More details on the PMT arrangement in the water tank, the muon efficiency, and the plastic scintillator design also need to be worked out.

Silicon Photomultiplier and Readout
The TAO detector is intended for precise measurements of the reactor antineutrino energy spectrum. With a yield of 4500 photoelectrons (p.e.) per MeV, the stochastic term of the energy resolution will be 1.5%/√(E [MeV]). In most regions of interest, the expected energy resolution will be sub-percent. Photodetectors more efficient than conventional Photomultiplier Tubes (PMTs) are therefore required. Silicon Photomultipliers (SiPMs) can have a Photon Detection Efficiency (PDE) twice as high as that of PMTs and will be used for TAO. A SiPM is a silicon-based solid-state device constructed as an array of many small Single Photon Avalanche Diodes (SPADs) with dimensions from 10 to 100 µm. Each SPAD works in Geiger mode and is integrated with its own passive quenching resistor. All SPADs are connected in parallel, so the output charge of the SiPM is the sum of the charges generated by the fired SPADs and is proportional to the number of detected photons.
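As a quick arithmetic check of the quoted stochastic term, assuming purely Poisson photoelectron statistics:

    import math

    pe_per_mev = 4500.0

    def stochastic_resolution(energy_mev):
        """Relative energy resolution from photoelectron counting alone."""
        n_pe = pe_per_mev * energy_mev
        return 1.0 / math.sqrt(n_pe)

    for e in (1.0, 4.0, 8.0):
        print(f"E = {e} MeV: sigma_E/E = {stochastic_resolution(e):.2%}")
    # E = 1 MeV -> 1.49%, matching the quoted 1.5%/sqrt(E);
    # above ~2.2 MeV the stochastic term alone is already sub-percent.

Dark counts and correlated noise inflate this ideal figure, which motivates the requirements described below.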
Compared to PMTs, SiPMs have a high PDE, but they also have very large thermal noise at room temperature. The TAO detector will therefore operate at −50 °C, which reduces the thermal noise by about three orders of magnitude with respect to room temperature.

Requirements on the SiPM parameters
The requirements on the SiPM parameters should allow the detector to reach an energy resolution as high as possible for neutrino events, while remaining realistic for contemporary SiPM technology. The requirements for the TAO experiment are summarized in Table 6-1 and described in the following. Contemporary SiPMs can have a PDE higher than 50%. However, the PDE of a SiPM usually correlates with its dark counts, provoked by thermal noise, and with its correlated noise, mainly optical cross-talk and afterpulsing. We require the PDE of the SiPMs to be ≥ 50%, and evaluate the effects of the three parameters jointly to optimize the detector energy resolution. The uniformity of the breakdown voltage V_bd is required to be ≤ 10% to avoid bias-voltage tuning on a SiPM tile, as explained in Sec. 6.5. To demonstrate the effects of the Dark Counts (DC) and the Correlated Noise (CN), we express the statistical fluctuations of the detector response, under simple assumptions, in terms of N_pe, the average p.e. yield for an event of energy E, and an excess noise factor f_EN that combines the dark count contribution N_DC and the correlated noise contribution N_CN. To find a compromise between the two contributions, consider a realistic Dark Count Rate (DCR) of 100 Hz/mm² at a temperature of −50 °C. Given the total SiPM area of ∼10 m², the dark count rate summed over the whole TAO detector is R_DC = 1 GHz. Using the model of a branching process with a generalized Poisson distribution from Ref. [63], the excess noise factor from correlated noise is

f_EN,CN = 1/(1 − λ),    (6.8)

where λ is the probability of correlated noise. Using Eq. 6.8 and the value of f_EN,CN from Eq. 6.7, we estimate λ ∼ 13%. We therefore require the probability of correlated noise of the SiPMs for TAO, including cross-talk and afterpulsing, to be < 10%. A toy Monte Carlo simulation has been used to evaluate the joint effects of the PDE, the dark count rate, and the correlated noise. For a given dark count rate and probability of correlated noise, the required PDE is shown as contour lines in Figure 6-1, obtained by requiring a reference energy resolution of σ_E/E = 1.7% at 1 MeV from pure statistics. For example, when the dark count rate is 100 Hz/mm² and the probability of correlated noise is 10%, the required PDE to reach the reference energy resolution is 50%, consistent with the analytic calculation above.

Requirements
A SiPM tile is used to support the SiPMs and provide the connections to the readout electronics. It holds multiple SiPMs and is the basic unit during detector installation. The SiPM coverage within a single tile should be larger than 94% to ensure a sufficient overall PDE. The materials of the tiles must have low radioactivity since they are close to the GdLS. The radioactivity of the SiPM tiles, including the readout electronics, should be less than 4.4 Bq/kg, 6.3 Bq/kg, and 1 Bq/kg for uranium, thorium, and potassium, respectively. Moreover, all materials must be compatible with the buffer liquid (LAB in the baseline design). Standard FR4 Printed Circuit Board (PCB) material cannot be used to fabricate the tiles due to its high radioactivity; some low-background materials suitable for tile fabrication, such as Pyralux and Cuflon, are under investigation.
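Returning to Eq. 6.8: assuming, as in the standard excess-noise treatment, that f_EN,CN multiplies the variance of the photoelectron count (so the resolution scales with its square root), the inflation of the stochastic resolution term is easy to tabulate. A minimal sketch:

    import math

    def resolution_inflation(cn_probability):
        """Factor by which correlated noise inflates the stochastic term,
        using the generalized-Poisson excess noise factor f = 1/(1 - lambda)
        of Eq. 6.8; the resolution scales as sqrt(f)."""
        f_en = 1.0 / (1.0 - cn_probability)
        return math.sqrt(f_en)

    for lam in (0.05, 0.10, 0.13):
        print(f"lambda = {lam:.0%}: stochastic term x {resolution_inflation(lam):.3f}")
    # 5% -> x1.026, 10% -> x1.054, 13% -> x1.072: the 1.49% stochastic term
    # at 1 MeV grows to ~1.6% at lambda = 13%, close to (but still below) the
    # 1.7% reference budget, whose remainder is taken up by dark counts.
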
A low-background material similar to that of the tiles will be used for the front-end electronics PCB.

Tile design
There are two factors in the tile design that affect the overall PDE. The first is the SiPM coverage within an individual tile, which depends on how the SiPMs are bonded to the tile. If SiPMs with Through Silicon Vias (TSVs) are available, a coverage close to 100% can be expected. However, if wire bonding has to be used to connect the front sides of the SiPMs to the tile, the coverage is reduced to leave space for bonding pads on the tile. After investigating the technologies currently offered by SiPM manufacturers, we choose wire bonding as the baseline, considering its maturity and cost. A tile with an area of about 25 cm² consists of an 8 × 8 array of SiPMs with dimensions of 6 × 6 mm², or a 6 × 4 array of SiPMs with dimensions of 10 × 10 mm². Large SiPMs have certain advantages: they provide a slightly higher coverage, but the production yield might be significantly lower, which results in a higher cost. With 6 × 6 mm² SiPMs, leaving a 200 µm gap for bonding pads and a 100 µm gap between SiPMs, a coverage of about 95% can be achieved; more than 97% coverage is possible with 10 × 10 mm² SiPMs. The second factor is the gap between SiPM tiles. After investigating various arrangements of SiPM tiles on the supporting copper shell, we found a maximum coverage of about 95.5% with rectangular tiles of dimensions 5 × 5 cm². Regions at the poles of the spherical copper shell have a smaller tile coverage. Using some irregularly shaped tiles, such as trapezoids, both the coverage and, more importantly, its uniformity can be improved. More investigation is needed before a final decision. To reduce the number of readout channels, it is essential to group the SiPMs either within the SiPM tile or on the front-end electronics board. In both cases the connections to the SiPMs are routed to the back of the tiles; in the former case, the connectors need fewer pins.

Tile packaging
The SiPMs will be packaged on the tiles for protection and easy handling. The window material could be epoxy resin if LAB is used as the buffer; if a different liquid is chosen, silicone could be considered instead. A good match of the refractive indices of the window material and the buffer liquid is required to reduce light reflection at the surface. R&D efforts are needed to choose the final window material.

Mass characterization of SiPM
Careful characterization of all SiPMs is needed to control the quality of the SiPMs used in the TAO detector. SiPMs will be tested at two levels, the wafer level and the tile level. At the wafer level, we can obtain the breakdown voltage V_bd of each SiPM by measuring its Current-Voltage (IV) curve and thereby probe the variation of this parameter. A good uniformity of V_bd avoids having to sort SiPMs by V_bd during assembly. At the tile level, we will test every tile to ensure that the assembly was performed well and that the tile has the desired performance.

Characterization on wafer level
Modern semiconductor technologies are well developed, allowing the production of SiPMs with high yield and consistent parameters. The variation of their characteristics within a wafer should be rather small, so testing a few dies at different positions on the wafer may allow us to assess the quality of the entire wafer.
However, since the SiPM yield on the wafer is not 100%, we still need to characterize each SiPM to remove the bad ones. The SiPM parameters correlate strongly with the bias voltage and are expected to be consistent within the same production batch. A simple and robust way to characterize SiPMs is to measure their IV curves with an automatic probe station; this method will be applied for the characterization of individual SiPMs at the wafer level. Selecting SiPMs with similar breakdown voltages would allow the SiPMs to be biased from a single voltage supply, which helps to reduce the number of voltage channels and makes the SiPM bias robust and cheap. We present the IV-curve measurements of 16 SiPMs from Ref. [64] as an illustration. Measuring the IV curve under forward bias (Figure 6-2 (b)) probes the average quenching resistance, while the reverse IV curve (Figure 6-2 (a)) reveals the breakdown voltage and hence the operating voltage range. To find the breakdown voltage one can use the Inverse Logarithmic Derivative (ILD) [65],

f(V) = (d ln I / dV)^(−1);

the minimum of f(V) gives the breakdown voltage V_bd. Another robust approach is to apply a quadratic fit to the reverse IV curve [66]. Before dicing, the IV curves of each SiPM will be measured at the wafer level. This work will be conducted either by the SiPM manufacturer(s), or by the TAO team if the wafer dicing and packaging are done by the team. In the former case, the testing data will be provided by the manufacturer to help the tile-level tests.

Characterization on tile level
In total, about 4100 SiPM tiles will be tested. The first step is a visual inspection of the window (epoxy) quality for dust and bubbles. The second step is the simultaneous test of 16 tiles temporarily mounted on a large testing PCB. Each tile is supplied from an individual voltage source, allowing precise control of the voltage of each tile. In a dark room, each SiPM on every tile is scanned with self-stabilized LEDs [67]. Each LED is calibrated by means of a reference SiPM sitting next to each tile, as shown in Figure 6-3 (a). The LEDs are placed above the tiles and provide pulsed illumination over an area of 6 × 6 mm². The testing PCB is moved by two stepper motors, positioning the LED beam precisely with respect to each SiPM, as shown in Figure 6-3 (b). Testing all the SiPMs on 16 tiles requires 64 scan points per tile, and each scan point requires an acquisition of ∼10^4 events. A full scan of 16 tiles will take about 10-20 minutes, so testing all the SiPM tiles for TAO will take less than one month. This scan characterizes all SiPMs in terms of PDE, gain, cross-talk, afterpulsing, and Single Photoelectron (SPE) resolution. As a cross-check of the breakdown voltage extracted from the wafer test, it can also be obtained from the charge measurement of the SiPM pulses with the pulsed LED light source. The PDE is proportional to the mean number of p.e. detected by the SiPM, assuming that the number of detected photons in each LED flash follows a Poisson distribution. The mean number of p.e. can be estimated by counting the number of pedestal events [68]. This method is less sensitive to the full SiPM response model, which can be quite complex [69], including cross-talk and afterpulsing [70]. By integrating the waveforms from the SiPM recorded by a flash Analog-to-Digital Converter (ADC), the charge spectrum can be obtained, as shown in Figure 6-4 (a). We acquire N events for the signal spectrum (LED on) and D events for the dark spectrum (LED off).
Using the number of pedestal events N_0 in the signal spectrum (the blue area under the pedestal peak in Figure 6-4 (a)) and the corresponding number D_0 in the dark spectrum over the same range, the average number of p.e. can be estimated as [71]

μ̂ = ln p̂_0^λ − ln p̂_0^ξ,    (6.10)

where p̂_0^ξ = N_0/N and p̂_0^λ = D_0/D are estimators of the pedestal probability in the signal (ξ) and dark (λ) spectra, respectively. The statistical error can be calculated as [71]

σ_μ̂² = (1 − p̂_0^ξ)/(N p̂_0^ξ) + (1 − p̂_0^λ)/(D p̂_0^λ).    (6.11)

A representation of Eq. 6.11 is shown in Figure 6-4 (b). The idea is to adjust the light intensity to reach the best accuracy with a limited number of acquisitions. For a SiPM with a noise level of ∼MHz (at room temperature) and a gate window of 100-500 ns (λ = 0.1-0.5), a light intensity in the range of 1.5-2.4 photoelectrons is needed to reach a calibration accuracy of ∼2% with 10^4 acquisitions. The other parameters, such as the gain, cross-talk, afterpulsing, and SPE resolution, can also be extracted from the SiPM's charge spectrum.

(Figure 6-4: (a) SiPM charge spectrum and its parameters: blue area, pedestal events; G_pix, pixel gain; P_1, σ_1, single-pixel response and its standard deviation; Q_0, σ_0, pedestal position and its standard deviation. (b) Relative dispersion σ_μ/μ of the pedestal-method estimate as a function of μ for N = 10000 triggers and two noise levels, λ = 1.0 (magenta) and λ = 0.1 (blue); points are from 1000 toy experiments, curves from Eq. 6.11.)

R&D efforts of SiPM characterization
To study the PDE at different temperatures, we use a Dewar vessel partially filled with Liquid Nitrogen (LN). The nitrogen vapor produces a temperature gradient at different heights, which can be exploited for measurements over a broad range of temperatures, from room temperature down to the LN environment. We use a stabilized LED. Initially, we observed a variation of the light transmitted through the fiber at the level of about 10% when the temperature changed from room temperature to about −50 °C, likely driven by changes in the optical properties of the fiber. To remove this effect we decided to stabilize the temperature along the length of the fiber. The 30-liter cryostat vessel is filled to about one third with LN (see Figure 6-5 (a)). An assembly with the light delivery system (see Figure 6-5 (b)) moves together with the SiPM along the depth of the cryostat. The light delivery system is enclosed in an insulated copper pipe for screening, with a bundle of optical fibers placed inside. Light is sent through the central fiber, while the other fibers are used to monitor the light stability as the temperature changes. The pipe is wrapped with a heating cable, with feedback provided by a thermal sensor inside the pipe. A mirror placed in front of the fiber bundle allows a stability check at high light intensity, high enough to monitor the reflected light with a silicon PIN photodiode. Our measurements show that the stability is at the level of 1%. The mirror is then replaced with a SiPM (Hamamatsu S13360-6025CS), coupled with thermal grease to an aluminum carrier (bed) with an embedded thermosensor on its backside. The temperature stability is guaranteed to better than 1 °C. We study the PDE (see Figure 6-6 (a)) and the DCR (see Figure 6-6 (b)) at two temperatures, 23 °C (room temperature) and −52 °C (around the TAO working temperature). The breakdown voltages are V_bd(23 °C) ≈ 53.1 V and V_bd(−52 °C) ≈ 48.5 V.
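As a cross-check of the pedestal estimator of Eqs. 6.10-6.11, a toy Monte Carlo sketch (the numbers are illustrative, not a measured configuration):

    import math, random

    def pedestal_estimate(n_events, mu_true, lam_dark):
        """Estimate the mean p.e. number via the pedestal method,
        mu_hat = ln(p0_dark) - ln(p0_signal), as in Eq. 6.10."""
        # An event lands in the pedestal when neither LED light nor dark
        # noise fires any pixel; both counts are Poisson distributed.
        n0 = sum(random.random() < math.exp(-(mu_true + lam_dark))
                 for _ in range(n_events))
        d0 = sum(random.random() < math.exp(-lam_dark) for _ in range(n_events))
        return math.log(d0 / n_events) - math.log(n0 / n_events)

    random.seed(1)
    estimates = [pedestal_estimate(10_000, mu_true=2.0, lam_dark=0.3)
                 for _ in range(200)]
    mean = sum(estimates) / len(estimates)
    spread = (sum((x - mean) ** 2 for x in estimates) / len(estimates)) ** 0.5
    print(f"mu_hat = {mean:.3f} +- {spread:.3f}")
    # ~2.00 +- 0.03, i.e. ~1.5% accuracy, consistent with Eq. 6.11.
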
Lacking an absolute calibration of the PDE, we normalize our measured PDE to 25% at the operating voltage V_op = 57.9 V at room temperature, as specified by the vendor [72]. As can be seen from Figure 6-6 (a), at low temperature the PDE can reach a maximum value similar to that at room temperature, about 20% higher than at the reference operating voltage: a maximum PDE of about 30% is reached at −52 °C with a bias voltage of 60.0 V, and a maximum PDE of about 29% is reached at 23 °C with a bias voltage of 61.0 V. To study the temperature dependence of the PDE, we plot it as a function of the overvoltage V_ov = V − V_bd (see Figure 6-7); there is no significant difference between the curves at room and low temperature. The noise level of the SiPM is shown in Figure 6-6 (b): the DCR is less than 5 kHz (∼140 Hz/mm²) at the maximum PDE at −52 °C, slightly higher than our specification.

Some of the undetected light may be reflected off the SiPM surface. Thanks to the high surface coverage, there is a good chance that this light is detected by other SiPMs, so it is important to understand the reflectance of the SiPMs. The active area of a SiPM has a mirror-like surface and produces primarily specular reflections; some diffuse reflection is also expected because of the microstructures on the SiPM surface, such as the quenching resistors, trenches, and traces connecting the SPADs [73]. We have developed a dedicated experimental setup to measure the reflectance of SiPMs in air and in LAB at visible wavelengths. Two SiPMs have been measured with this setup: one from FBK, model NUV-HD-lowCT [74], with 40 µm pixels, and one from Hamamatsu, model S14160-6050HS [75], with 50 µm pixels. Both have dimensions of 6 × 6 mm² and are packaged with protective layers of epoxy resin (FBK) and silicone resin (Hamamatsu). Our results show that the reflectance of the FBK SiPM in air varies between 14% and 23%, depending on the wavelength and the angle of incidence, about twice that of the Hamamatsu device. This indicates that the two manufacturers use different anti-reflective coatings on the SiPM surface. The reflectance is reduced by about 10% when the SiPMs are immersed in LAB; see Ref. [76] for more details.

SiPM power supply
There are two possibilities for biasing the SiPMs with a reverse voltage, shown schematically in Figure 6-8 (all component values are for illustration only). The first possibility uses a unipolar supply and applies the voltage from one side only, as shown in Figure 6-8 (left). The second applies a bipolar voltage from both sides, as shown in Figure 6-8 (right). The first approach has a great advantage: the readout circuit is Direct-Current (DC) coupled to the SiPM. The second one needs Alternating-Current (AC) coupling, which brings additional nuisances for high loads and long pulses, and might be problematic at high rates. Following the first approach, we have tested a candidate system from the company HVSys [77]. Its architecture aims at low cost with sufficient performance: high stability of the output voltage and a low level of ripple. It provides remote monitoring and control of the voltage and current of all channels. The system consists of a system module; from the other side, the 4100 tiles are supplied with a single bias voltage.
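Supplying many tiles from a common bias line is only viable if the breakdown voltages are sufficiently uniform, since the SiPM gain is, to first order, proportional to the overvoltage. A minimal sketch with assumed numbers (taking the measured V_bd ≈ 48.5 V at low temperature as nominal and an assumed 3 V overvoltage):

    v_bias = 51.5          # V, common bias line (assumed)
    v_bd_nominal = 48.5    # V, nominal breakdown voltage at low temperature

    # Gain ~ C_pixel * (V_bias - V_bd): a spread in V_bd across devices sharing
    # the same bias line translates directly into a gain spread.
    overvoltage = v_bias - v_bd_nominal
    for dv in (0.1, 0.3, 0.5):   # assumed V_bd spreads in volts
        print(f"V_bd spread {dv:.1f} V -> gain spread ~ {dv / overvoltage:.1%}")
    # 0.1 V -> ~3%, 0.3 V -> ~10%, 0.5 V -> ~17% at 3 V overvoltage

This is why the V_bd selection at the wafer level and the ≤ 10% uniformity requirement of Table 6-1 matter for the single-supply scheme.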
Each DAC is controlled by a micro-PC through a Serial Peripheral Interface (SPI). With this scheme we can control the common current, but cannot monitor each tile individually. Its main advantage is the low cost of ≈ 5-10 $/channel; by selecting SiPMs with similar operating voltages and biasing them from a common source, the cost could be reduced further.

(Figure: (a) custom-made HV unit by JINR; (b) DAC81416EVM evaluation module by TI.)

Requirements
The total number of SiPMs is about 2.7 × 10^5, assuming dimensions of 6 × 6 mm² for a single SiPM. Simulations show that about 4500 photoelectrons will be collected by the SiPM tiles per MeV of energy deposited in the GdLS. We are interested in IBD events with energies from 1 MeV to 10 MeV. Most SiPMs will be empty, and only a few photons will be captured by each fired SiPM, depending on the position of the event; the number may reach hundreds of photons for events with large energy deposits, such as muon showers or muon bundles. The electronics system is required to measure the event energy precisely in the IBD energy region, while good energy precision is not essential for high-energy events. The noise level is a challenge due to the high capacitance of the large-area SiPMs, especially given the restrictions imposed by the very large number of channels. To reduce the number of channels, we would like to combine multiple SiPMs into one readout channel. With this method, the SiPMs connected to one readout channel are required to have a good uniformity of the breakdown voltages (better than 10%, as listed in Table 6-1) in order to reduce the gain variations and preserve the charge resolution; the combination of SiPMs nevertheless needs more careful investigation. A good timing resolution of the electronics is also preferable, driven by the advantage of detecting Cherenkov photons: the degradation of the neutrino energy resolution due to the kinetic-energy spread of the neutron recoil can be significantly suppressed with information on the positron direction, which can only be reconstructed from Cherenkov photons. In addition, Cherenkov photons might help to improve particle discrimination and suppress radioactive backgrounds. The operation of the electronics at −50 °C puts additional constraints on the design, especially on the power consumption: the light yield of the GdLS depends on the temperature, and a good temperature uniformity is necessary to reduce the light-yield variation and preserve the energy resolution. The number of cables from the front-end electronics in the cryostat to the back-end of the detector at room temperature needs to be minimized for practical reasons. The radioactivity of the electronics materials is also a concern, since they are close to the detector target. The requirements are summarized below:
1. Noise: The equivalent noise charge contributed by the electronics should be less than 0.1 p.e. At this level, its contribution to the energy resolution is negligible compared to other factors.
2. Resolution: The charge resolution is required to be better than 15%, which results in a degradation of the energy resolution of less than 0.5%.
3. Timing: Since the fast time constant of the GdLS is about 1-3 ns, the timing resolution of the electronics is required to be better than 1 ns for the purpose of Cherenkov photon detection.
4. Dynamic range: This is set by the number of SiPM devices combined into one readout channel.
Dedicated simulations show that the number of photoelectrons per square centimeter of SiPM is in the range of 0 to 15 p.e. for IBD events uniformly distributed in the fiducial volume (with a 25 cm standoff cut). Thus, if one SiPM tile (8 × 8 SiPMs) is read out as one channel, the dynamic range is from 1 to 375 p.e.; if the SiPM area per readout channel is 1 cm², the dynamic range is reduced to 15 p.e.
5. Power: Power is distributed in the cryostat by the Front-End Electronics (FEE) close to the tiles. The Front-End Controller (FEC) boards might be placed either in the top region of the cryostat (i.e. in the Stainless Steel Tank) or outside the tank at room temperature; we choose the latter as our baseline option. The power consumption of the FEE, and of the FEC in the in-cryostat case, is required to be less than 1 kW each, which keeps the temperature variation of the GdLS below ±0.5 °C.
6. Radio-purity: The requirements for the readout boards and tiles are less than 4.4 Bq/kg, 6.3 Bq/kg, and 1 Bq/kg for uranium, thorium, and potassium, respectively, dominated by the contribution from the readout boards.

Two readout options are considered for the ∼4100 SiPM tiles. One is based on an ASIC, the other on commercially available discrete components. The ASIC-based option allows a high readout granularity, at the level of 1 cm² per channel; the discrete-component option connects all SiPMs in one tile to a single channel to save cost and space. The two options are discussed separately in the following sections.

ASIC readout option
Several ASICs designed for SiPM readout are available, but only a few are suitable for TAO, which requires single-photon detection, a time resolution at the 1 ns level, a high signal-to-noise ratio, and low power consumption. Thanks to the high integration, the SiPM area of each readout channel can be reduced to the level of ∼1 cm². Preliminary simulations show that the higher readout granularity yields more powerful pulse-shape discrimination and Cherenkov light detection, because more information is available. A small readout area also leads to a small input capacitance for the electronics, so that a much simpler passive SiPM grouping method can be used, such as parallel or series connections. The KLauS ASIC [80], developed by Heidelberg University, is a candidate chip for TAO. It has 36 input channels and was designed for the Analog Hadron CALorimeter (AHCAL) of the CALICE collaboration [81].

KLauS chip
The KLauS chip is a 36-channel ASIC fabricated in the UMC 180 nm CMOS technology. A block-level schematic of one KLauS channel is shown in Figure 6-11. It comprises a front-end that shapes the input signal with two different gains and circuitry that detects pulses. Selected pulses are routed to an ADC which digitizes the analog information. After detection, a digital control circuit initiates and controls the analog-to-digital conversion and passes the digitized data to the digital back-end, which is responsible for combining, buffering, and sending the data to the Data Acquisition (DAQ). Currently the fifth version of the chip (KLauS5) is available for testing, and the next version, including a TDC with 200 ps bins, is under fabrication. The main features of KLauS5 are described in Ref. [82].

Architecture
If two SiPMs are connected to one KLauS channel, one chip is sufficient to read out a whole tile of 8 × 8 SiPMs.
The two SiPMs can be ganged with either a parallel or a series connection. The architecture of the readout is shown in Figure 6-12. One ASIC board will be connected to one SiPM tile by connectors or short flat cables; if larger tiles are eventually used, several chips will be deployed on one ASIC board. The configuration, data transfer, and power supply of the KLauS chips will be managed by FPGAs arranged on separate boards. Assuming each FPGA board handles 36 KLauS chips, 114 FPGA boards are needed in total. The Serial Peripheral Interface (SPI) is used to configure KLauS and provide the clock; a total of five connections is needed per chip. The data collected by KLauS are stored in its local FIFOs: each chip has three L1 FIFOs storing 64 events each and one L2 FIFO storing 512 events. If the L1 and L2 FIFOs are full, the analog-to-digital conversion is inhibited until data are read out and space becomes available. A pair of differential lines with a nominal bandwidth of 160 Mbit/s transmits the data from the FIFOs to the FPGA boards via Low Voltage Differential Signaling (LVDS). The maximum data rate is expected to be 10 Mbit/s for the sum of the 32 channels in one chip, dominated by the dark noise of the SiPMs. The FEC will also distribute the power to the ASIC boards, including the bias voltage required by the SiPMs. The FPGA boards will be located either at the top of the central detector, flushed with nitrogen gas, or outside the Stainless Steel Tank, where more power consumption can be tolerated. The lengths of the cables between the ASIC boards and the FPGA boards vary from tens of centimeters to about 10 m, depending on the location of the FPGA boards. One optical link from each FPGA board transfers the data to the DAQ hardware located outside the Stainless Steel Tank; the same link is used to send trigger signals and the clock back to the FPGAs. The White Rabbit system, developed at CERN [83], will be used to synchronize the clocks among the FPGAs, sharing the optical link with the data transfer. In total there are 114 optical links from the FPGA boards to the DAQ system, together with several cables for the power supply.

R&D with the KLauS chip
Testing boards for the KLauS5 were built, as shown in Figure 6-13. On the right, the ASIC board holds the chip; it provides interfaces for analog monitoring and debugging, and connects to the SiPMs via two connectors with 72 pins in total. The board on the left, called the interface board, provides power and a slow-control interface to the ASIC board. A Raspberry Pi (Model 3B), located under the interface board, is used to control and configure the ASIC; it is connected to a local PC via the local area network for remote configuration and data taking. The DAQ software is provided by the ASIC developers at Heidelberg University. Basic functional tests were performed by connecting a 3 × 3 mm² SiPM (Hamamatsu S13360-3025CS) to the ASIC board. The testing boards were placed in a cryogenic box with the temperature controllable from room temperature down to −70 °C. An LED positioned above the ASIC board, driven by a pulse generator, provided pulsed light to the SiPM. The analog pulses after the shaper circuit were monitored during the cooling process.
No significant changes of the waveform were observed between room temperature and −50 °C. A snapshot of typical waveforms is shown in Figure 6-14: a clear single-p.e. signal is observed, well separated from the baseline. The digitized output charge, measured at −50 °C, is shown in Figure 6-15. The left spectrum is measured in the dark with an overvoltage of about 3 V; the single-p.e. peak is clearly visible, together with a fraction of two-p.e. events caused by optical cross-talk and a small pedestal peak that can be eliminated by slightly increasing the trigger threshold in the chip. The right plot shows the charge spectrum measured with pulsed light on the SiPM at an overvoltage of about 2 V, in which the single-p.e. signal is rejected by a relatively high trigger threshold; a good separation is observed even for multiple p.e. The analog-to-digital conversion time of the KLauS5 chip was studied at room temperature by directly injecting two charges of different magnitudes into the chip, with the time interval between them adjusted through the delay of the second charge in a pulse generator. During the processing of the first charge, the second charge cannot be detected. This is demonstrated in Figure 6-16, which shows the fraction of second charges detected as a function of the time between the two injections: the KLauS5 chip fully recovers within 700 ns, which meets the requirements for IBD detection in the TAO detector. Overall, the preliminary tests show that the KLauS5 chip works well at −50 °C, with good single-photon detection performance and an acceptable dead time. More detailed studies are essential and will be conducted in the near future.

Discrete component option
A readout system built from commercial off-the-shelf components, referred to here as the discrete component option, can be adopted for TAO without a dedicated ASIC. This approach offers high flexibility in the selection of the components. The proposed solution consists of:
• an Analog Front End Board (FEB) that amplifies and shapes the SiPM signals;
• a Front End Controller (FEC), based on an FPGA, that continuously samples and digitizes the signals coming from the FEBs and pre-processes and formats the data; this is needed to control, and eventually reduce, the output bandwidth for transmitting the data to the DAQ;
• a Central Unit (CU), a digital board based on a high-end FPGA, acting as a global supervisor.
The FEB and the FEC are mechanically and logically separated in order to maximize flexibility and performance.

Architecture
The architecture of the discrete component option, shown in Figure 6-17, is based on a FEB capable of reading a large-area tile (5 × 5 cm²) of SiPMs. A SiPM has an output capacitance proportional to its area, so the capacitive noise must be controlled in this approach. On each tile, the SiPMs are ganged together with series and parallel connections; an example of this passive ganging is shown in Figure 6-18. A series connection reduces the equivalent capacitance by a factor of 2. Each FEB reads one tile through a board-to-board connector. A first amplifier stage converts the SiPM output charge into a voltage pulse, which is further amplified by a second stage. The analog signal from the whole tile is presented on the FEB output connector, and the amplified output is sent to the FEC, which collects data from 32 FEBs.
128 FECs are needed for the ∼4100 tiles. All the FPGA boards will act as White Rabbit nodes to ensure sub-nanosecond synchronization. On the FEC, each input from a FEB is digitized by a dedicated ADC, while a fast comparator ensures a good time resolution independent of the ADC sampling rate. The digitized data collected by the FPGA are sent out of the cryostat via an optical fiber link. The CU that manages the FEBs is located outside the cryostat; it collects the data from the 128 optical links and is connected to the White Rabbit network. Moreover, the CU performs first-level triggering and event building, and manages the data transfer.

Front End Board (FEB)
The basic structure of each FEB consists of a number of Trans-Impedance Amplifier (TIA) nodes, up to 6, in order to read out a large-area SiPM tile (∼5 × 5 cm² or more). The baseline is 4 TIAs, each handling 6 cm² of SiPMs. The tile will be mechanically restrained to the FEB with a solid fixture such as the Würth REDCUBE SMT series. Figure 6-19 shows a drawing of the FEB and the connector side of a tile. This solution allows a very simple installation of the FEBs; a single connector simplifies the design of the supporting copper shell and leaves more space for the components. On the opposite side of the PCB, a four-pin connector provides power to the FEB and the SiPM tile, with all amplifiers using the same +V/−V supply. On the same side, the analog output is routed to the FEC through an SMA connector. The TIAs share the capacitance of the SiPM tile, and a summation amplifier adds their outputs into an analog sum. This second stage can be used to raise the total gain to the best compromise between single-p.e. resolution and linearity at high p.e. Figure 6-18 shows the connection of the SiPMs to the FEB, and Figure 6-20 sketches the signal processing. More TIA nodes can be added on a FEB to further lower the input capacitance, especially if larger tiles are chosen. The major benefit is the input signal bandwidth, which is directly affected by the input capacitance and degrades as more SiPMs are connected in parallel. SiGe technology is a good option for the FEB operational amplifiers, given the −50 °C working temperature of the detector and the required high bandwidth. A single-photoelectron resolution below 20% has already been achieved with this approach, while a 2 Vpp (peak-to-peak voltage) output dynamic range should ensure linearity in the 1-50 p.e. range. A power consumption of 150 mW is expected for each FEB; the board will be directly connected to the supporting copper shell, simplifying the heat removal. A preliminary version of the FEB has been built and extensively tested, and following the results of these tests a new version has been produced; it is shown in Figure 6-21, together with the linearity measured with a current source. A measured charge distribution of dark counts from a 1 cm² SiPM is shown in Figure 6-22, demonstrating the single-photon counting capability and resolution of the proposed electronics.

Front End Controller (FEC)
The FEC controls the FEBs through an FPGA; Figure 6-23 shows a block diagram of the FEC. Cable assemblies such as the EQRF series from SAMTEC can be used to connect the FEBs' SMA connectors to the FEC, with cable lengths from 15 cm to 100 cm. The first option is to place each FEC near its FEBs; the second is to place all the FECs on the top of the cryostat to simplify the thermal management.
A precise evaluation of the effect of the different cable lengths is necessary. A third option is to put all the FEC boards outside the cryostat. Highly efficient DC/DC converters will be placed on the board to supply the FPGA and the ADCs; an efficiency of at least 90% is targeted to meet the power constraint. The DC/DC converters for the FEBs can also be located on the FEC, avoiding additional supply-distribution boards inside the cryostat. Each FEC collects the outputs of 32 FEBs. Each channel is digitized with a 40-80 MSample/s ADC. Each SiPM pulse is shaped to ∼200 ns, so about 10-20 samples are recorded per pulse. Given the integer nature of the SiPM signal (the output is the sum of n SPAD signals), 10 samples are sufficient to obtain a good estimate of the number of p.e. The integral of each pulse can be computed inside the FPGA and converted to a p.e. number, with a sliding-window integration to reduce the noise. Each channel then produces a single byte of data, covering the range of 0-255 p.e.; zero suppression can be applied to reduce the bandwidth further. If only the computed p.e. number is transferred, the bandwidth becomes independent of the ADC sampling rate. Most ADCs on the market have LVDS (Low Voltage Differential Signaling) outputs; to manage 32 ADCs (32 differential pairs plus differential clock and control signals), an FPGA with 100 I/O ports is sufficient. The White Rabbit core for synchronization can be installed on most seventh-generation Xilinx FPGAs and on some Ultrascale models as well; the control and synchronization of the ADCs can be handled by the same FPGA, and the same link can be used for synchronization and data transfer, which helps to reduce the power consumption of the FEC. Multi-channel ADCs like the AD9249 consume 58 mW per channel at a sampling rate of 65 MSample/s with 14-bit resolution, i.e. about 2 W for the 32 ADC channels. Each input link will be monitored by a fast comparator with an adjustable threshold near 0.5 p.e., with the comparator outputs connected directly to the FPGA. A timestamp will be generated each time a comparator output fires and will be associated with the corresponding ADC data. Another approach is to obtain a timestamp better than the sampling period with digital filters. Both approaches, fast comparator and digital filters, will be evaluated to ensure a resolution of a few nanoseconds. With a giga-sample flash ADC the fast comparator could be avoided, and the output bandwidth would remain independent of the sampling rate, but the power consumption would increase. A preliminary version of the FEC with 2 ADC channels has been tested; it is shown in Figure 6-24. Figure 6-25 shows the charge spectrum of dark counts from a 1 cm² SiPM obtained with a FEB connected to the FEC, with the charge integral computed inside the FPGA, demonstrating the single-photon counting capability and resolution of the proposed electronics.

(Figure 6-21: left, a FEB prototype with two SMA connectors, one for the summation output and one connected to a TIA output for debugging; right, the TIA output linearity at different temperatures, with the injected charge on the x-axis and the output voltage on the y-axis.)

Central Unit (CU)
The CU will be placed outside the cryostat, with fewer constraints on the power budget and on the temperature range of the components.
A high-end FPGA like the Xilinx Ultrascale+ can manage the data streams from all the FECs.

Trigger, DAQ and DCS

Trigger requirements and conceptual design
The TAO trigger will have to cope with the signals of ∼4100 SiPM tiles of the central detector. The average number of photoelectrons detected by each SiPM tile ranges from one, for low-energy events, up to hundreds, for high-energy events with vertices located very close to the detector border. The design goals of the trigger system are summarized below:
• The efficiency for IBD events produced by reactor ν̄e should be close to one for energy deposits greater than 1 MeV.
• The trigger system should be able to suppress the detector-related background and reduce the random coincidences due to the SiPM dark counts.
A trigger system based on a distributed FPGA architecture is an optimal and scalable solution for both the discrete component and the ASIC readout options.

Discrete component readout option
In the discrete component readout option, described in Sec. 6.5.3, the Front-End Controller (FEC) board will perform first-level data management, including the setting of thresholds, zero suppression, and a first-level (L1) trigger based on local majority and event topology. A second-level (L2) trigger is necessary to select the correlated SiPM hits produced by physical events out of a sea of random dark-noise hits; the L2 trigger algorithms will be implemented in the Central Unit (CU). A simple majority logic, based on the coincidence of different channels within a certain time window, suppresses the dark count rate by several orders of magnitude. At −50 °C, the Dark Count Rate (DCR) of a single 5 × 5 cm² tile is ∼250 kHz. As shown in Figure 7-1, for a coincidence time window of 100 ns, the global dark count rate drops well below 1 kHz when requiring a coincidence among ∼150 channels; requiring ∼160 (170) channels within 100 ns brings the global DCR down to 10 Hz (1 Hz). Additional L2 algorithms can also be implemented, for example a high-multiplicity trigger to tag events with a large energy deposit close to the border of the liquid scintillator volume. The signature of neutron capture on gadolinium or hydrogen nuclei can be exploited to tag the IBD events either offline or online, thus reducing the background event rate. A preliminary estimate of the expected trigger rates is reported in Table 7-1.

ASIC readout option
A similar approach can be followed in the ASIC readout option, described in Sec. 6.5.2. An FPGA board similar to the FEC will manage the signals coming from the ASIC chips, with each board controlling 36 ASICs. A first-level local trigger can be implemented in the FPGA board to search for locally correlated SiPM hits produced by physical events, thereby suppressing the background due to the SiPM dark counts. Each FPGA board will be connected via a high-speed optical link to the Central Unit, where the second-level (L2) trigger algorithms provide a global trigger decision.

DAQ conceptual design scheme
The main task of the Data Acquisition (DAQ) system is to record the antineutrino events. The DAQ also has to cope with other kinds of events, such as muons and low-energy radioactive decays, in order to understand the detector background.
The DAQ has to record the data from the antineutrino and muon veto detectors with precise timing and charge information, and to build events from the data fragments coming from all the electronics devices. The TAO DAQ system is based on a multi-level distributed FPGA architecture. In the discrete component readout configuration, shown in Figure 7-2, the first level is dedicated to the management of the signals coming from the Front-End Boards (FEBs). Each FEC manages 32 FEBs and is equipped with a Xilinx Zynq/Kintex FPGA that continuously samples the signals coming from the FEBs and estimates the number of detected photoelectrons. The trigger will be managed by the Central Unit, based on a high-performance FPGA of the Xilinx Ultrascale+ generation, briefly described in Sec. 6.5.3. The CU receives the data from each FEC board via a high-speed optical link. The charge, timestamp, and channel ID of each SiPM readout are stored in a cyclic memory buffer. The FPGA in the CU manages the data, looks for coincidences, asserts the trigger condition, and builds the event. In normal running conditions, only the charge (i.e. the number of photons) and the time recorded by each SiPM readout channel are stored; the FEC and CU firmware will also provide the possibility of saving the whole waveform, which is useful especially during the debugging, commissioning, and calibration phases. A similar architecture can be exploited in the ASIC option, as shown in Figure 7-2: an FPGA similar to that used in the FEC manages the signals coming from 36 ASIC chips and delivers them to the CU, which manages the trigger and the event building. While the charge, timestamp, and ID of each channel are recorded in this approach as well, the ASIC does not allow keeping the whole waveform. All the events satisfying the trigger condition will be sent to a computing farm via optical links or fast Ethernet and written to disk.

Rate and data throughput estimation
The DAQ system based on the multi-level FPGA architecture provides a real-time estimate of the number of photoelectrons seen by each SiPM tile with nanosecond timing resolution. In the ideal case, the total data volume should not exceed 100 Mbps, allowing an easy data transfer via standard Ethernet.

Discrete component readout option
In the discrete component readout option, the electronics uses 16 bits for the number of photoelectrons and 16 bits for the time of each channel; an additional 32 bits are used for control purposes, so the data size of a single SiPM signal is 64 bits, including the header and the SiPM readout ID. The DAQ system has to deal with ∼4100 channels, one per SiPM tile. The data throughput of the TAO detector can be preliminarily evaluated for this configuration; the main contributions are reported in Table 7-2. The total data production rate will be less than 100 Mbps.

ASIC readout option
In the ASIC readout option, the DAQ system has to deal with about 140,000 channels. The event size is 52 bits/channel: 40 bits for the charge and the timestamp, and 12 bits for the chip ID. The expected data throughput is reported in Table 7-3. In this case, the expected data production rate exceeds 2 Gbps, so it is essential to apply data reduction algorithms to fit the data volume into 100 Mbps. The veto data throughput has been evaluated to be only a small fraction of the total volume.
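A back-of-the-envelope version of the discrete-option throughput estimate, assuming (hypothetically) a global trigger rate of ∼100 Hz and readout of all tiles per trigger:

    n_channels = 4100        # one channel per SiPM tile
    bits_per_hit = 64        # 16 bit charge + 16 bit time + 32 bit control/ID
    trigger_rate = 100.0     # Hz, assumed global trigger rate

    raw_rate = n_channels * bits_per_hit * trigger_rate / 1e6   # Mbps
    print(f"no zero suppression: {raw_rate:.0f} Mbps")          # ~26 Mbps

    # With zero suppression, only tiles above threshold are shipped;
    # e.g. if on average 25% of the tiles fire per event:
    print(f"with 25% occupancy:  {0.25 * raw_rate:.1f} Mbps")   # ~6.6 Mbps

Both numbers sit comfortably below the 100 Mbps budget, consistent with Table 7-2; it is the ∼34× larger channel count of the ASIC option that pushes its raw throughput into the Gbps regime before data reduction.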
Online system
The online system is designed to collect the physics data from the electronics readout modules, build events, process and compress the data, and finally save the data to disk.

Requirements
Performance: The data throughput of TAO has been discussed in Sec. 7.3. The online system should support the data transfer bandwidth with negligible dead time.
Data processing: As described above, for the discrete component readout option the input data bandwidth will be < 100 Mbps; in this case the online system can save all the data to storage. For the ASIC readout option, the input data rate will reach about 2.3 Gbps even after the global trigger, so real-time online data compression is needed to reduce the data rate to less than 100 Mbps. The compression algorithm should be carefully studied to make sure the compression ratio meets the bandwidth requirement, and its performance should be optimized taking into account the online CPU resources, reliability, and stability.
Other functions: Besides the basic dataflow-related functions, the online system will also provide common functions such as run control, run monitoring, configuration, data quality checks, and information sharing.

Conceptual design scheme
Hardware structure: The hardware platform of the online system will be based on commercial computing servers and network equipment. The data rate produced by TAO can be managed by a computing infrastructure based on commercial products already available on the market, ensuring robustness and redundancy at an affordable cost. Data will be stored in a local computing farm and subsequently transferred to the IHEP data centers. A ∼50 TB onsite storage will fulfill the TAO requirements; it can be managed by one file server, with an additional one foreseen for redundancy. Two online servers will be needed for the dataflow management and for online functions such as computing, run monitoring, run control, and data quality monitoring. The data transfer can be managed using standard Ethernet and iSCSI interfaces, with multi-port 10 Gbit network switches handling the storage and server communications.
Software framework: Based on the functional requirements of the online system, the software framework is divided into two layers, the dataflow layer and the interactive layer, as shown in Figure 7-3; the two layers cooperate to achieve the full functionality of the online system.
• The dataflow layer is responsible for the data transfer from the readout electronics, the processing (event building), and the storage.
• The interactive layer is responsible for all the management, controls, and operations during data taking. It is the structural foundation of the online system, used for information transmission, real-time monitoring, and online display, and it provides the interface between the users and the online system.
The two layers work independently and are connected through Ethernet, reducing interference and thus improving the robustness of the system.

Requirements
In order to keep the TAO detector data taking stable and to guarantee the performance over long-term operation, a standard Detector Control System (DCS) will be implemented. The main task of the DCS is to monitor and control the working conditions of the detector and to raise alarms if a monitored parameter goes out of range.
The DCS is designed with different modules responsible for the following tasks:
• Monitoring and run control:
- control of the high-voltage system and the SiPM low-voltage power supply;
- monitoring of the electronics racks and crates for the veto and central detector, including the electronics temperature, power supply, fan speed, and over-current protection; the sensors include low-temperature sensors with oil-proof packaging (−70 °C to 30 °C);
- monitoring of the gas system, including the radon sensors, the oxygen concentration, and the nitrogen cover gas flow;
- monitoring of the central detector overflow tank: liquid level, video, and pressure sensors ranging from 0 to 5 m;
- monitoring of the calibration system, including the source position and the motor control;
- monitoring of the experimental hall: temperature, humidity, pressure, and video;
- monitoring of the control room, the database system, and the web-based remote system;
• database recording, historical data query, and archive viewer;
• logic for alarms, errors, and events;
• interlock logic;
• embedded remote-control modules;
• interface to the DAQ and trigger;
• replication of the detector settings and monitored parameters to the offline software.
The operational status of the system devices is monitored in real time and recorded in a database. Meanwhile, the devices are protected by safety interlocks to prevent equipment damage and for personnel protection.

Overall goals and scheme
The overall goal of the design is to build a distributed system to remotely control all the equipment of the detector, running industrial or self-designed devices. Following the requirements, six different subsystems, as well as the common experimental infrastructure, are foreseen to be controlled and monitored by the DCS, including about 60 temperature and humidity sensors and hundreds of power supply readings. The DCS framework can be divided into three parts, as shown in Figure 7-4. The global control layer allows for general control procedures and efficient error recognition and handling. It manages the communication among subsystems such as the central detector, the ACU, and the veto, and provides a synchronization mechanism between the DAQ and the trigger system. A database is used to store the parameters of the experiment and the configuration parameters of the systems, and to replicate a subset of them for physics reconstruction. The local control layer provides tools for the management of the local subsystem devices. The data acquisition layer is responsible for the various hardware interfaces using the Channel Access (CA) protocol. The DCS platform will be based on EPICS. The system adopts a distributed development method: according to the distribution of the experimental equipment, a distributed data-exchange platform will be used for the development. The global control systems share data and interactive control commands through an information-sharing pool. Configuration files will use a text-format specification.

TAO DCS EPICS features
The key technologies of the development include the remote monitoring and control of many kinds of embedded devices supported by protocols such as TCP/IP, SNMP, and Modbus. The development of the embedded device drivers is based on the Input/Output Controller (IOC) and communicates with the upper-layer software following the CA protocol. The system can remotely monitor and control the embedded devices so that the detector can operate without an onsite shifter.
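As an illustration of the CA-based monitoring described above, here is a minimal alarm-watcher sketch using the pyepics Python bindings (the PV name and limits are hypothetical placeholders, not the actual TAO naming scheme, and a reachable EPICS IOC is assumed):

    # pip install pyepics; requires a running EPICS IOC serving this PV.
    import time
    from epics import PV

    # Hypothetical process variable for a GdLS temperature channel; the
    # +-0.5 degC window mirrors the temperature-uniformity goal quoted earlier.
    TEMP_PV, LIMITS = "TAO:CD:GdLS:Temp01", (-50.5, -49.5)

    def on_change(pvname=None, value=None, **kws):
        """Callback fired by Channel Access on every value update."""
        lo, hi = LIMITS
        if value is not None and not (lo <= value <= hi):
            print(f"ALARM: {pvname} = {value:.2f} degC outside [{lo}, {hi}]")

    pv = PV(TEMP_PV)
    pv.add_callback(on_change)   # monitor via a CA subscription

    # In the real DCS this logic lives in the IOC and alarm server; a script
    # like this is only useful for quick checks during commissioning.
    for _ in range(600):
        time.sleep(1.0)
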
In addition, the global layer manages and maintains the data in a MySQL-based database. Equipment controlled by sending and receiving strings is handled through the communication module "StreamDevice". The IOC communication modules for equipment based on SNMP and Modbus are developed in C. The Control System Studio [85] will be the main platform for the development of the GUIs. Commercial systems or specific hardware/software systems will be integrated through IOC modules. Drivers will be developed to collect the records related to the front-end sensors after digitization in the embedded devices.

In order to remove device-specific knowledge from the record support, each record type can have a set of device support modules. Every record support module must provide a record-processing routine to be called by the database scanners. Record processing consists of some combination of the following functions:

• input: read inputs. Inputs can be obtained, via device support routines, from hardware, from other database records via database links, or from other IOCs via CA links.

• conversion: conversion of the raw input to engineering units, or of engineering units to the raw output.

• output: write outputs. Output can be directed, via device support routines, to hardware, to other database records via database links, or to other IOCs via CA links.

• raise alarms: check for and raise alarms.

• monitor: trigger monitors related to CA callbacks.

• link: trigger processing of linked records.

The devices supported by the embedded systems can be PLCs, ARM boards and FPGAs with standard interfaces such as USB and serial RS232, as well as network devices based on TCP/IP or UDP.

R&D plan

A testbed will be arranged for each subsystem in order to test the hardware drivers and software systems. Each detector subsystem model will be tested in its testbed, as will the data collection development for the device drivers. For devices for which a testbed cannot be set up, simulation models will be built together with the software development team, based on a common definition of the interface specifications, including data format, transmission frequency, control flow, interface distribution, physical correspondence tables, etc.

A software framework with a set of integrated modules will be developed at the beginning, according to the detector hardware requirements. The interfaces will be defined in discussion with the experts of each subsystem. Data collection, visual monitoring, data recording, historical queries, safety interlocks for remote monitoring and control, dynamic data handling, and customizable functional configuration will be developed during system assembly. During commissioning, user feedback will be collected to improve the performance. After commissioning, formal access to the global system will be provided.

Requirements

The offline software plays a fundamental role in the data management and in improving the efficiency of the physics analyses. The offline software system of TAO will share several structures and packages already used in the JUNO software. A detailed description of the software and packages shared by JUNO and TAO will be given in the next sections. The following requirements have been determined:

1. Reactor antineutrino events will be only a small fraction of the total number of events recorded by TAO.
The offline software should provide an efficient and flexible data I/O mechanism and event buffering in order to allow high-efficiency data storage and access.

2. An efficient and detailed simulation of reactor antineutrino events as well as of backgrounds is necessary to guide the detector design and to evaluate detector efficiencies and systematic errors.

3. A detailed optical model of the Gd-loaded liquid scintillator and a response model of the Silicon Photomultipliers (SiPMs) are needed in order to investigate the detector energy resolution and reconstruction performance. Reliable energy reconstruction algorithms should be developed based on the charge and time information of the SiPMs. An accurate vertex reconstruction is required in order to precisely determine the fiducial volume and to veto natural radioactivity events coming from outside the liquid scintillator volume.

4. Event display software is needed in order to show the event structure and the reconstruction performance.

TAO is envisaged to run for at least 3 years, collecting about 1.2 PB of data during this period. A substantial part of the software infrastructure developed for JUNO will be re-used in TAO.

SNiPER

It is straightforward to use the same software platform and framework in TAO as in JUNO, for the purposes of resource optimization, manpower integration and learning curve reduction. Therefore, we plan to develop the TAO offline software based on the SNiPER framework [86] on the Linux platform with the GNU compiler. Meanwhile, CMT (Configuration Management Tool) [87] will be selected as the tool to manage packages and generate makefiles.

The SNiPER framework, standing for "Software for Non-collider Physics Experiments", has been successfully used in the JUNO offline software system. It was originally designed and implemented for the JUNO experiment with object-oriented technology in two languages, C++ and Python. SNiPER introduces innovations in multi-task process control and in the handling of correlated events through an event buffer mechanism, and it has fewer dependencies on third-party software and tools. It also provides interfaces for the implementation of multi-threaded computing. SNiPER requires users to implement their software as modules, and these modules can be loaded and executed dynamically at run time. Communications between modules, such as data access, are managed by the interfaces provided by SNiPER. According to their functionality, the modules in SNiPER are further distinguished into the following key components, shown in Figure 8-1.

1. The Task is used to control the event loop and manage the algorithms and services.

2. The Algorithm is implemented by users to analyze event data. It is invoked by the framework during the event loop.

3. The Service helps users to access required parameters and data globally.

4. The Property allows users to change parameters during the job configuration, avoiding code modification and re-compilation.

5. The SniperLog is implemented for logs with different output levels.

6. The Incident triggers the registered subroutines on demand, thus making data processing more flexible.

7. The Data Buffer stores event data over a period of time, and the length of the buffer is configurable by users.

JUNO's offline software system [88] is designed and developed as modules within the SNiPER framework. Most of the modules in JUNO have been implemented and validated by JUNO collaborators.
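To make the module structure concrete, the following schematic Python sketch mimics the Task/Algorithm pattern described above. It is not the real SNiPER API; all class and method names are illustrative only.

# Schematic sketch of the Task/Algorithm pattern (NOT the real SNiPER API).
class Algorithm:
    """User event-processing module, invoked once per event by the Task."""
    def initialize(self): pass
    def execute(self, event): raise NotImplementedError
    def finalize(self): pass

class Task:
    """Controls the event loop and owns the algorithms."""
    def __init__(self): self.algorithms = []
    def add(self, alg): self.algorithms.append(alg)
    def run(self, events):
        for alg in self.algorithms: alg.initialize()
        for evt in events:                           # the event loop
            for alg in self.algorithms: alg.execute(evt)
        for alg in self.algorithms: alg.finalize()

class PrintEnergy(Algorithm):
    def execute(self, event):
        print(f"E = {event['energy']:.2f} MeV")

task = Task()
task.add(PrintEnergy())
task.run([{"energy": 1.02}, {"energy": 2.21}])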
TAO's offline software will be based on the JUNO offline system and the SNiPER framework, as illustrated in Figure 8-2. Therefore, the same external libraries, such as Geant4 [89,90], CLHEP [91], ROOT [92], etc., can be shared by JUNO and TAO. In particular, part of the JUNO offline software can be re-used by TAO, thus avoiding duplicated work. With this design, the JUNO offline software system will not depend on the TAO offline software, so that its release process and maintenance will not be affected by TAO. Figure 8-2 shows the conceptual design of the TAO offline software, which will use the SNiPER framework and the existing JUNO offline software; modules similar to those in JUNO will be implemented, such as simulation, reconstruction, calibration, analysis, etc.

Event data model

A good event data model design for TAO is essential to achieve highly efficient data processing and easy data analysis. Due to the much smaller data volume compared with JUNO, the event data model [93] developed for JUNO is sufficient for TAO and thus will be used. The event data model in JUNO is implemented based on ROOT TObject, in order to benefit from ROOT's features, such as schema evolution, I/O streamers, run time type identification, inspection and so on. At each data processing stage, the event data model is designed with two levels: the event header object and the event object. The event header object only stores summary information of events, such as tags, timestamps, etc., which allows users to access events more efficiently, without loading the full event data. In particular, users can perform an efficient data analysis by selecting events based on the information in the header object and then loading the event object containing the full information for a deeper data analysis. The association between the header object and the event object is implemented via SmartRef, which provides control over the loading of the full event data.

Geometry management

Detector geometry information is required at different stages of offline data processing. A consistent geometry must be used across the stages to ensure correct data handling. In JUNO, we implemented a geometry management system [94] in the SNiPER framework. It is used to describe the detector properties, such as the geometrical structure, shape, dimensions, materials, positions and rotations of detector components in the coordinate system. It also provides a unified interface for applications in the offline software where geometry information is necessary, such as simulation, reconstruction, event display and data analysis, thus ensuring the consistency of the geometry used in different modules. In the TAO offline system, a similar geometry management approach based on GDML [95] will be used. The GDML geometry will be automatically generated by the embedded G4-GDML writer in Geant4; it will then be converted to ROOT format and used by a geometry service. The geometry service will be implemented as a module in the offline system to provide interfaces for applications that use geometries.
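Before moving to the generators, the two-level event data model described above can be illustrated with a small sketch: a light header used for event selection, with the full event object loaded lazily through a SmartRef-like reference. The classes below are hypothetical stand-ins, not the actual JUNO implementation.

# Toy two-level event data model with deferred loading of the full event.
class EventRef:
    """Stand-in for SmartRef: defers loading of the full event object."""
    def __init__(self, loader, key):
        self._loader, self._key, self._cached = loader, key, None
    def get(self):
        if self._cached is None:
            self._cached = self._loader(self._key)   # load only on demand
        return self._cached

class EventHeader:
    """Summary information only: tag, timestamp, reference to full data."""
    def __init__(self, tag, timestamp, ref):
        self.tag, self.timestamp, self.ref = tag, timestamp, ref

def load_full_event(key):
    # In reality this would read the event object from a ROOT file.
    print(f"loading full event {key} from disk")
    return {"key": key, "hits": []}

headers = [EventHeader("IBD", 1000 + i, EventRef(load_full_event, i))
           for i in range(5)]
# Select on header information first; only matching events touch the disk.
for h in headers:
    if h.tag == "IBD" and h.timestamp > 1002:
        full = h.ref.get()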
Generators

The Monte Carlo event generators for TAO are quite similar to those used in the Daya Bay experiment, since both experiments use a similar target (GdLS) to detect electron antineutrinos emitted from reactor cores. All generators that are available in Daya Bay have been implemented in the JUNO offline software. Meanwhile, some new generators have also been developed to fulfill the requirements of the JUNO physics program, such as generators for geo-neutrinos, atmospheric neutrinos, proton decays, supernovae, etc. Since the TAO offline software system will use JUNO's offline software, all generators developed for JUNO can be directly used in the TAO detector simulation. In particular, radioactivity generators can be used for background simulations and calibration studies, and the Inverse Beta Decay (IBD) generator can be used to study the detector response.

The muon generator cannot be directly used in TAO, due to the different overburden of JUNO and TAO: a much higher muon flux and a smaller average muon energy are expected in TAO. In the current TAO simulation, we use the muon flux and energy spectrum measured at sea level, combined with an implementation of the geometry of TAO's basement location and its overburden. JUNO's IBD generator is sufficient for the TAO detector R&D studies; it will be updated in the future to take into account the thermal power and fission fractions of the reactor cores at the Taishan nuclear power plant. Several useful tools have been developed in JUNO to help users generate events in specific volumes or materials at specific times. However, due to the different geometries, these tools have to be re-implemented in TAO's offline software.

Standalone simulation packages for detector R&D

At the beginning of the project, a dedicated standalone simulation package based on Geant4 was implemented for detector R&D studies. It provides guidance for choosing the detector option that best fulfills the desired physics goals. Based on this package, we have conducted very detailed studies on the energy resolution, the natural radioactivity and fast neutron backgrounds, the pulse shape discrimination capability, etc.

In the simulation, the detector geometry is constructed with C++ code and the geometry tools provided by Geant4, including the central detector, the veto system and a simplified geometry of the basement and its overburden. Different detector design options have been implemented in the simulation and can easily be selected by users. The accuracy of the detector geometry description meets the requirements of the R&D studies at the current stage. The optical properties of materials, such as GdLS, acrylic, water, etc., are managed in XML files and serve as inputs to the detector geometry construction. For simplicity, the SiPM sensors are modelled as 5 cm × 5 cm tiles acting as sensitive detectors, instead of constructing the single cells individually. The coverage of the SiPMs within a tile is, however, taken as a correction factor to the Photon Detection Efficiency (PDE) of the SiPMs to ensure the correct overall light detection efficiency. Figure 8-3 shows one of the central detector configurations constructed in the simulation, in which the SiPM tiles are drawn in red and the acrylic sphere in blue. The optical surfaces are also well defined during geometry construction in order to simulate optical photon propagation and boundary processes.
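The tile-coverage correction to the PDE mentioned above amounts to a simple rescaling, sketched below with made-up numbers; the real values depend on the chosen SiPM model and tile layout.

# Toy illustration of the tile-coverage correction to the SiPM PDE.
tile_area = 50.0 * 50.0     # mm^2, the 5 cm x 5 cm tile in the simulation
active_area = 46.0 * 46.0   # mm^2, hypothetical active area of the cells
coverage = active_area / tile_area
pde_device = 0.50           # hypothetical intrinsic PDE of the SiPM
pde_effective = pde_device * coverage
print(f"coverage = {coverage:.3f}, effective PDE = {pde_effective:.3f}")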
A complete list of the physics processes has been borrowed from the simulation code of the Daya Bay experiment, which has been extensively validated. The low energy electromagnetic processes are used for electrons and gammas, which yield a better simulation accuracy compared with the standard electromagnetic processes. A high precision model is selected for neutrons below 20 MeV. Moreover, several bugs in Geant4 have been fixed regarding the multiplicity and energy spectra of the gammas emitted after neutron capture on nuclei. A customized scintillation process is implemented to handle the re-emission of optical photons and to apply multiple time constants in the liquid scintillator light emission.

Besides the built-in generators in Geant4, external generators with an HEPEvt interface, such as the IBD generator developed for the Daya Bay and JUNO experiments, can also be used directly in the simulation. A position tool has been developed to help users generate events in a specific volume or material with pre-defined position distributions. The simulation output is saved in plain ROOT trees. Currently tens of variables are stored in the trees, and new variables can easily be added according to users' requirements.

Integration with the SNiPER framework

The detector simulation is one of the components of the TAO offline software system, so it is essential to implement the simulation within the SNiPER framework, thus ensuring consistency with the other components and maximizing the benefits of the framework, such as much easier data transfer between modules. A simple way to implement the simulation in SNiPER is to integrate the developed standalone simulation package with the framework. This process, however, requires some changes in order to fit the standalone package into the framework. During the JUNO simulation software development, we investigated several integration solutions used by other experiments, such as BES-III [96], LHC-b [97] and Daya Bay, from which we worked out a final solution for the JUNO experiment. The design schema is shown in Figure 8-4. It keeps the integration simple, while most parts of the standalone package, such as the complicated detector geometry construction, remain unchanged.

The main issue arises from the fact that in the standalone package the event loop is managed by the Geant4 tool G4RunManager, while in the SNiPER framework the event loop is managed by a custom algorithm. In order to integrate the simulation code easily into the framework, we will develop a SNiPER service that delivers the G4RunManager object to a specifically designed SNiPER algorithm in charge of controlling the event loop. Meanwhile, a SNiPER interface (IDetSimFactory) will be provided to users to manage the construction of the necessary object instances in the simulation, such as the detector geometry, physics processes, primary generators and various user hooks.

In the SNiPER framework, plain ROOT trees are no longer suitable to store the simulation information. We will instead use the event data model to perform the data transfer between modules. For example, the generated particle information from the event generators will be wrapped in data objects with a predefined event data model format, then transferred to the detector simulation stage and used to perform the simulation. The output of the simulation will also be saved in data objects, which will be transferred to the next stage or saved to disk. The data transfer and data I/O are automatically managed by the framework.

Electronics simulation

The main goal of the electronics simulation is to simulate the real response of the SiPM sensors, the electronics system, the trigger system and the Data Acquisition (DAQ) system with sufficient accuracy. It consists of SiPM and electronics modelling, trigger simulation and DAQ simulation.
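As a flavor of the first stage, the toy sketch below applies a simple SiPM response (detection efficiency, geometric optical cross talk and Poisson dark noise) to a set of true photon hit times. All rates and probabilities are illustrative assumptions, not the tuned TAO response model.

# Toy SiPM response: PDE, cross talk and dark noise applied to true hits.
import numpy as np
rng = np.random.default_rng(0)

PDE = 0.47           # hypothetical photon detection efficiency
P_XTALK = 0.10       # hypothetical optical cross-talk probability
DARK_RATE = 100.0    # Hz/mm^2, dark-noise figure quoted later in this chapter
TILE_AREA = 50 * 50  # mm^2
WINDOW = 1e-6        # s, readout window

def sipm_response(true_hit_times):
    detected = true_hit_times[rng.random(true_hit_times.size) < PDE]
    # Each detected photon fires extra cells with a geometric cross-talk chain.
    npe = rng.geometric(1.0 - P_XTALK, size=detected.size)
    n_dark = rng.poisson(DARK_RATE * TILE_AREA * WINDOW)
    dark_times = rng.uniform(0.0, WINDOW, n_dark)
    times = np.concatenate([detected, dark_times])
    charges = np.concatenate([npe, np.ones(n_dark)])
    order = np.argsort(times)
    return times[order], charges[order]

t, q = sipm_response(rng.uniform(0.0, 100e-9, 20))
print(f"{t.size} hits, total charge {q.sum():.0f} p.e.")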
The output of the electronics simulation will keep the same data format as the experimental data to simplify the data analysis procedure. The electronics simulation will be implemented as a module in the SNiPER framework. Taking the SiPM hit information from the Geant4-based detector simulation as input, the response of the SiPM sensors will be modelled first: photon detection efficiency, dark noise, optical cross talk, after-pulsing, etc. Then we will model the full response of the electronics readout chain. For the discrete component readout option (see Sec. 6.5.3), the signals from the SiPMs will be amplified by the Front-End Electronics (FEE), where the FEE effects such as noise, gain and waveform shape will be applied. This will be followed by waveform sampling and data transfer to the FPGA. In the FPGA, the waveform will be integrated and converted into charge and time information; the same integration method will be used in the electronics simulation. Finally, the charge and time will be sent to the DAQ together with the channel IDs. For the ASIC readout option, the electronics simulation will simulate the response of each stage of the ASIC (see Sec. 6.5.2). Taking the KLauS ASIC as an example, this includes the input stage, integrator, trigger, shaper and ADC/TDC. The trigger simulation takes the output of the electronics simulation as input to simulate the trigger logic and the clock system, and decides whether to issue a trigger signal for the current event. Currently, several trigger options are under discussion, and the trigger simulation will be implemented for each of them to study their performance.

Background mixing

In real data, most events come from backgrounds, such as natural radioactivity and cosmic-muon-induced events. In order to simulate the real situation, a background mixing algorithm needs to be developed to mix signal events with background events. It is an essential module if we want the Monte Carlo data to match the real data well. There are two options for background mixing: hit-level mixing and readout-level mixing. In hit-level mixing, the hits from both signal and background are first sorted in time and then handled by the electronics simulation. Hit-level mixing is closer to the real case, but requires substantial computing resources. In JUNO, a hit-level mixing algorithm has been implemented. Readout-level mixing is much easier to implement and requires fewer computing resources, but it cannot accurately model the overlap between multiple hits. Since both options have advantages and disadvantages, more studies are necessary before a final option is selected as the official TAO background mixing algorithm.

Reconstruction

The major task of the reconstruction is to provide the event vertex and energy with the desired accuracy for further data analysis. Events of interest, with energy depositions below 10 MeV, can be treated as point-like events; track reconstruction, even for high energy events, is not required. It is also not essential to reconstruct an accurate energy for muon events, since they will be identified by an energy threshold, such as the 20 MeV used in the Daya Bay analysis, followed by a full-detector veto [98]. The water Cherenkov veto detectors are used to tag cosmic muons by detecting Cherenkov light and to provide shielding for the central detector. An additional plastic scintillator detector will be placed on top of the central detector, improving the tagging of vertical muons.
No track reconstruction is needed for the veto detectors. All reconstruction algorithms are required to be developed within the SNiPER framework.

Vertex reconstruction

The vertex information is critical to correct for the energy non-uniformity in the TAO detector and to improve the energy resolution. A proper vertex reconstruction is needed to correctly apply a fiducial volume cut and to correlate the prompt and delayed signals of IBD events in space. For these reasons, a vertex position reconstruction with a resolution better than 5 cm is required. Both the charge and the time information recorded by the electronics system can be used to reconstruct the event vertex.

At the R&D stage of the TAO experiment, a vertex reconstruction algorithm named center of charge, based only on the charge information, has been used. This method is also successfully used in the Daya Bay experiment [98]. The charge-weighted position of the vertex of an event can be simply calculated via the formula

\[ \vec{r}_{\mathrm{cc}} = \frac{\sum_{i=1}^{N} q_i\,\vec{r}_i}{\sum_{i=1}^{N} q_i}, \tag{8.1} \]

in which N is the total number of fired channels, and \(\vec{r}_i\) and \(q_i\) denote the position and the detected charge of the i-th fired readout channel, respectively.

The vertex position obtained using Eq. 8.1 has an intrinsic bias caused by geometrical effects. For a spherical detector such as TAO, this effect can be accurately predicted by a simple mathematical calculation, as shown in Eq. 8.2 (taking the position in the x direction as an example):

\[ x_{\mathrm{cc}} = \frac{\oint x\,\mathrm{d}\Omega}{\oint \mathrm{d}\Omega} = \frac{2}{3}\,x_0, \tag{8.2} \]

where \(x_0\) represents the true x position of the event in the detector; x indicates the x positions of the photo-sensors on the sphere; and dΩ denotes the solid angle subtended at the true vertex by the photo-sensor at position x, which depends on the distance d between the true vertex and the sensor. Eq. 8.2 shows that the position reconstructed using Eq. 8.1 is 2/3 of the true position. This relationship is reproduced with the TAO simulation software by simulating positrons at rest deployed at different positions along the vertical z-axis, as illustrated in Figure 8-5. The plot shows the vertex position reconstructed according to Eq. 8.1 as a function of the true event position; the linear dependence has the expected slope of 2/3 predicted by Eq. 8.2.

The linear relationship between the charge center and the true position shown in Figure 8-5 changes significantly once more effects related to optical physics processes, such as absorption, scattering, refraction and reflection, are taken into account. In particular, simulations show that reflections on the SiPM surfaces can render the aforementioned linear correction function invalid. Figure 8-6 shows the position obtained using Eq. 8.1 for positrons at rest (blue) and with 2 MeV kinetic energy (red) at different positions along the z-axis; the relation is no longer linear, and a more complex correction is therefore needed. Figure 8-7 shows the differences between the reconstructed and true positions along the z coordinate for the prompt signal of IBD events in a fiducial volume of 65 cm radius. The linear correction function is used for the case without SiPM reflections, and a third-order polynomial for the case with reflections. The SiPM reflections also worsen the vertex resolution, but it still remains better than 5 cm.
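The 2/3 factor of Eq. 8.2 can be verified numerically for the ideal case (isotropic light emission, no absorption, scattering or reflections): rays emitted isotropically from the true vertex hit the sphere with exactly the solid-angle weighting of Eq. 8.2, so the mean hit position equals the charge center. The sketch below is a standalone cross-check, not part of the reconstruction code.

# Monte Carlo check of the 2/3 factor in Eq. 8.2 for an ideal sphere.
import numpy as np
rng = np.random.default_rng(1)

R, x0 = 0.9, 0.4        # sphere radius and true vertex x position (m)
n = 200_000
u = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # isotropic directions
p = np.array([x0, 0.0, 0.0])                    # true vertex
b = u @ p                                       # p . u for each ray
t = -b + np.sqrt(R**2 - x0**2 + b**2)           # forward intersection length
hits = p + t[:, None] * u                       # hit points on the sphere
print(f"charge center x = {hits[:, 0].mean():.4f}, (2/3) x0 = {2*x0/3:.4f}")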
In reality, the correction function can be obtained from calibration data, like the functions used in the Daya Bay experiment [99], since the true positions of the calibration sources are known. However, in the TAO detector only one ACU will be installed, so only the correction function along the z-axis can be evaluated. The same function can be used for the other axes if good uniformity of the detector response can be guaranteed. Otherwise, we will have to combine the calibration data with Monte Carlo simulation to obtain the correction functions for the whole detector volume. More effects will be added to the simulations, such as SiPM dark noise (typically 100 Hz/mm²) and correlated noise, and their impact on the vertex reconstruction will also be carefully investigated. Meanwhile, a time-based reconstruction method like the one used in JUNO will also be implemented [100]; it is expected to perform better than the center-of-charge method, since SiPMs have much better timing performance than PMTs. Besides the traditional methods, the performance of modern vertex reconstruction methods based on machine learning will also be investigated.

Energy reconstruction

The charge information of each readout channel can be used to reconstruct the event energy. The gain calibration of each readout channel can be performed using a low intensity light source or a low energy calibration source. Then, based on the gain of each channel, the charge can be converted to a number of photoelectrons, and the total number of photoelectrons can be estimated by summing over all readout channels. A calibration source placed at the detector center will be used to evaluate the energy scale and thus convert the number of photoelectrons into the reconstructed energy. The non-linearity of the energy response will be determined using calibration data from different sources with known energies. Figure 8-8 shows a scatter plot of the total number of photoelectrons as a function of the distance from the detector center, obtained by simulating electrons with 1 MeV kinetic energy uniformly distributed in the whole detector volume. The plot shows a position dependence of the energy response within the detector; it can be corrected using α particles from radioactivity in the GdLS or cosmogenic neutrons, since both are expected to be abundantly and uniformly distributed in the detector.

Maximum likelihood fitting is a powerful method to reconstruct the event energy and will be implemented in the TAO offline software. It is based on the knowledge of the detector response model, including the light yield and the attenuation length of the liquid scintillator and the LAB buffer, the angular response of the SiPMs and the SiPM charge resolution, and it takes the reconstructed vertex as input. Machine learning based algorithms will also be investigated and developed for TAO in order to improve the energy reconstruction and achieve the desired energy resolution of better than 2% at 1 MeV energy deposit.
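The baseline charge-to-energy chain described above is simple enough to sketch in a few lines; the gains and the photoelectron yield below are illustrative assumptions, not TAO design values.

# Sketch of the charge -> p.e. -> energy chain; all numbers are assumptions.
import numpy as np

gains = np.array([1.60e6, 1.58e6, 1.63e6])  # hypothetical channel gains (e-/p.e.)
charges = np.array([4.9e6, 6.2e6, 3.1e6])   # measured charge per channel (e-)
total_pe = (charges / gains).sum()          # photoelectron sum over channels

PE_PER_MEV = 4500.0   # hypothetical energy scale from a source at the center
energy = total_pe / PE_PER_MEV              # before non-linearity correction
print(f"total p.e. = {total_pe:.1f}, E ~ {energy:.4f} MeV")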
Event display

The event display is a useful tool to show the detector structure and the event topology; in particular, it can serve as an online monitor during data taking and can also help to improve the reconstruction algorithms and the data analysis. The requirements for TAO's event display system are almost the same as those of JUNO. In JUNO, two event display systems have been designed and implemented: one based on the ROOT Event Visualization Environment (EVE) package [101] and one based on the Unity engine [102].

Thanks to the same strategies used for detector geometry management and the event data model in JUNO and TAO, the ROOT-based event display developed for JUNO can be transferred to TAO with limited modifications, mainly related to the different variables stored in the event data objects. The ROOT-based event display is integrated in the offline software system as a module, usually set up on servers running Scientific Linux. It loads the detector geometry file to obtain the structure information and uses the EVE package to generate the visual objects. It then reads the event information from the different stages of offline data processing, changes the visual effects of the geometry objects based on the hit information, and shows detailed event information based on the reconstruction results. Users can select the event they want to display through the GUI.

The Unity-based event display has fewer dependencies on the JUNO offline software, so it can easily be deployed onto different platforms after development. Besides, as a game engine, Unity makes it easier for the developer to realize fancier visual effects. However, an extra data conversion is necessary to extract the event data from the ROOT files generated by the offline software. We expect that more effort is needed to implement the Unity-based event display for TAO, due to this data conversion. Whether or not to use the Unity-based event display in TAO will be decided later, depending on manpower.

Database

The database is an indispensable component of offline data processing, playing an important role in event data processing and data analysis. It stores much important information, such as detector running parameters, calibration constants, geometry parameters, optical properties of various materials, schema evolution, bookkeeping and so on. It also provides access to all of this information via management services, which allow users to create, query, modify or delete the stored data. Recently, the conditions database system has been designed and a database prototype has been established for testing in the JUNO offline system, which is envisaged to be used in TAO as well. The design schema of the conditions database system, containing three layers, is shown in Figure 8-9. Two additional auxiliary tables are used to store the Tag-IOV map and the Global Tag-Tag map. In the client layer, the conditions database service is developed in the SNiPER framework to perform the conversion from Persistent Objects to Transient Objects and to provide a database interface for different applications, such as simulation, reconstruction and data analysis. An intermediate layer (Frontier/Squid) between the client and the server provides data caching capabilities, which efficiently reduces the load on the central database when many users query the same conditions data at the same time. A web interface is developed for experts to manage the data in the database server. In the TAO offline data processing, we will share the same database servers and conditions database framework used in JUNO. However, since different information will be stored in the database for TAO, a new conditions database service will be implemented; it will inherit from the existing JUNO database service, with extended functionality to fulfill the requirements of TAO.
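The tag/IOV bookkeeping described above can be illustrated with a toy schema: a global tag maps each payload type to a tag, and each tag owns a set of intervals of validity (IOV) pointing to payloads. The sketch below uses SQLite and invented table names; it mirrors the structure, not the actual JUNO/TAO database design.

# Toy conditions-database lookup: global tag -> tag -> IOV -> payload.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE global_tag_tag (global_tag TEXT, payload_type TEXT, tag TEXT);
CREATE TABLE tag_iov (tag TEXT, run_first INT, run_last INT, payload TEXT);
""")
db.execute("INSERT INTO global_tag_tag VALUES ('TAO-v1', 'SiPMGain', 'gain-A')")
db.executemany("INSERT INTO tag_iov VALUES (?,?,?,?)",
               [("gain-A", 1, 99, "gains_run001"),
                ("gain-A", 100, 999, "gains_run100")])

def get_payload(global_tag, payload_type, run):
    row = db.execute(
        """SELECT i.payload FROM global_tag_tag g
           JOIN tag_iov i ON i.tag = g.tag
           WHERE g.global_tag = ? AND g.payload_type = ?
             AND ? BETWEEN i.run_first AND i.run_last""",
        (global_tag, payload_type, run)).fetchone()
    return row[0] if row else None

print(get_payload("TAO-v1", "SiPMGain", 42))    # -> gains_run001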
Requirements

The TAO detector will produce about 0.4 PB or less of raw data every year, which will be transferred to the Computing Center at the Institute of High Energy Physics (IHEP) in Beijing through a network connection. Meanwhile, a similar volume of Monte Carlo (MC) data will be produced for physics analysis. Both raw data and MC data need to be shared among collaboration members via the network. JUNO will establish a computing farm with about 12,000 CPU cores, 10 PB of disk storage and 30 PB of archive, of which 2000 CPU cores will be deployed for TAO offline data processing. The computing nodes and storage servers will be connected to each other by a 40 Gbps backbone high-speed switching network. Moreover, the platform will also integrate the computing resources contributed by outside members via a distributed computing environment. Compared with the roughly 2.4 PB of data per year in JUNO, the TAO data volume is modest. The performance, security and reliability of the JUNO data transfer have been studied intensively; therefore, we will use the same tools developed for JUNO to transfer TAO's raw data from the experimental site to IHEP's computing center.

Data transfer

Unlike JUNO, which plans to store the full PMT waveforms, TAO will store only the charge and time information for each readout channel. In addition, a second-level trigger is proposed, to be applied on the onsite DAQ cluster or at the level of the readout-board FPGAs, to further reduce the data rate. Considering the event rate and event size in the TAO detector, the predicted data rate is about 100 Mbps or less. Since only limited computing resources will be deployed onsite, it is critical to transfer all raw data from the experimental site to the IHEP data center. Given the raw data rate produced by the TAO detector, a link with a bandwidth of about 150 Mbps is sufficient for stable data transfer. As for JUNO, we will use the network provided by the Chinese Science and Technology Network (CSTNet) [105]. The data will first be transferred to IHEP through this link, then relayed to collaborating sites through CSTNet. At present, IHEP is connected to the CSTNet core network through two 10 Gbps links, one supporting IPv4 and the other IPv6. The bandwidth from IHEP to the USA is 10 Gbps and from IHEP to Europe 5 Gbps, both via CSTNet and with good network performance. The architecture of the network, including TAO, other experiments in China, Chinese clusters and the outside world, is shown in Figure 8-10.

During the transfers, checksums of the data will also be transferred to ensure data integrity. If the integrity check fails, the raw data will be re-transferred. After the transfer is completed and all integrity checks have passed, the status of the data in the DAQ local disk cache will be marked as TRANSFERRED. Moreover, a high/low watermark deletion algorithm will be used to clear outdated data from the DAQ local disk cache. Most of the data will be transferred automatically by the data transfer system in order to avoid possible human errors. To ensure the stability and robustness of the data transfer system, a monitoring system will be developed and deployed. It will monitor the data transfer and sharing status in real time and track the efficiency of the data transfer system. Combined with the status of the IT infrastructure (including the network bandwidth), the data transfer system will optimize the transfer path and recover from transfer failures automatically to improve the performance and stability of the system.
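The integrity check is conceptually simple; the sketch below verifies a transferred file against the checksum shipped with it and marks the cached copy as TRANSFERRED only on success. The side-car status file is a hypothetical convention for illustration.

# Verify a transferred file and mark it TRANSFERRED (status file assumed).
import hashlib
from pathlib import Path

def sha256sum(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while blk := f.read(chunk):
            h.update(blk)
    return h.hexdigest()

def verify_and_mark(local_file: Path, expected: str) -> bool:
    if sha256sum(local_file) != expected:
        return False                         # caller schedules a re-transfer
    status = local_file.with_suffix(local_file.suffix + ".TRANSFERRED")
    status.touch()                           # hypothetical status convention
    return True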
Data storage

The maximum raw data volume produced by the TAO detector is about 1.2 PB for 3 years of running. All of these data will be transferred from the onsite DAQ to IHEP and stored on disk at the IHEP data center; in addition, at least one full copy of the data will be kept on disk or tape to ensure safety. The data will also be transferred to computing centers outside IHEP to share them with collaboration members. The simulated data volume will be at the same level as the raw data.

Facility and Installation

Assembly and installation considerations should inform the entire design and R&D of the detector system. Careful logistical coordination will be essential for the receiving, staging, assembly, installation and testing of all detector components and subsystems, especially for the work around the nuclear reactor. This section discusses some of the considerations in the installation process and outlines a plan for the assembly and running of the detector system. The JUNO-TAO working group has a wide range of experience in the installation and operation of large detector systems, including the engineering and installation activities of Daya Bay, JUNO and DarkSide-50.

A Work Breakdown Structure (WBS) supports the overall planning, staging, control and execution of the final assembly and installation. It includes the labor, materials and general equipment required to perform these functions. The assembly work of the detector will be completed in the neutrino laboratory on the campus of the Taishan Nuclear Power Plant. Due to the strict regulations on accessing the core area of the nuclear power plant, the detector will be pre-assembled at IHEP to minimize the technical risks and the workload at the power plant. The assembly and installation work of JUNO-TAO includes:

1. Detector components will be fabricated and shipped to IHEP progressively.

2. Components and subsystems will be tested before and during assembly.

3. The central detector, without or with only a small fraction of the SiPMs and electronics, will be assembled and tested at IHEP first.

4. SiPMs, electronics, veto and shielding will be tested at IHEP, and may or may not be integrated with the central detector test.

5. The Taishan Neutrino Laboratory will be refurbished, instrumented and prepared for the detector installation and operation.

6. Detector components will be disassembled, shipped to the Taishan laboratory and re-assembled, except that the stainless steel tank (SST) and the acrylic vessel will not be re-used due to the transportation limitations of the laboratory passage. New SST and acrylic vessel components will be fabricated and welded/bonded in the Taishan laboratory.

7. GdLS will be produced at either IHEP or the Daya Bay site. All detector liquids will be transported to the Taishan laboratory and handled in a clean and safe manner.

The laboratory and facilities, the assembly and installation of the detector subsystems, and project management issues are described in the following subsections.

Laboratory and facility

A neutrino laboratory will be set up in the Taishan Nuclear Power Plant. The laboratory is in a basement 9.6 m underground, outside the concrete containment shell and about 30 m in horizontal distance from the center of the reactor core.
The vertical overburden is roughly estimated to be ∼5 meters water equivalent, coming mostly from the concrete floors and the roof of the building. The muon rate and the cosmogenic neutron rate were measured onsite with a plastic scintillator detector and Bonner spheres, respectively, to be about 1/3 of the values on the ground. The radiation dose was measured with a hand-held dose meter to be ∼0.4 µSv/h, twice as high as the ambient level on the campus, likely coming largely from the thick concrete walls. The power supply, water supply and ventilation are ready and satisfy the requirements of JUNO-TAO. The laboratory can be accessed via a stairway and an elevator. The elevator has dimensions of 1990 mm (depth) × 1390 mm (width) × 1990 mm (height) and a rated load of 2.5 t, posing a strict limitation on the size of all detector components. The detector design and installation plan have taken this limitation into account.

The layout of the Taishan Neutrino Laboratory is shown in Figure 9-1. The pink blocks show the footprint of the TAO detector and the relevant facilities, including the refrigerator, crates for electronics, DAQ, offline computer, network server, etc. The reactor core is about 30 m away in the north-west direction. The height of the laboratory is close to 5 m, but a steel beam structure on the roof reduces the available height to 3.85 m. The laboratory will be equipped with facilities for assembly, commissioning and running, including safety-related monitoring, network and management. A class 10,000 clean tent will be set up for the assembly of the central detector. Details and management rules will be elaborated with the power plant, following their requirements.

Central detector

Assembly of the central detector includes SST welding, acrylic bonding, integration of the SiPM tiles and Front-End Electronics (FEE) with the copper shell, assembly of all the above structures, cabling, liquid filling, etc. The general installation sequence is shown in Figure 9-2. The stainless steel tank will be made of 6 pieces for the barrel, 3 pieces for the bottom panel and 3 pieces for the lid. These parts are shaped in the factory and shipped to the neutrino laboratory, where they will be welded together with tooling to achieve the required precision. The flange between the barrel and the lid needs special attention, since it has to be assembled from 3 pieces onsite due to the transportation limitations. The SST welding seams will be treated with local acid pickling and passivation. The acrylic vessel will be bonded via polymerization from 3 pieces. An alternative is to clamp or latch the acrylic pieces together without bonding and to put a liquid bag inside to contain the GdLS, as described in Sec. 3.2.3.

The SiPM tiles and the front-end electronics boards will be mounted onto the partitions of the copper shell and tested at IHEP. After that, the parts will be wrapped in plastic film and shipped onsite. A clean tent is needed for the assembly of the acrylic vessel and the copper shell. The parts of the copper shell, with the SiPM tiles mounted on the inner surface, are bolted together with the bonded (or clamped) acrylic vessel inside. Then the whole copper shell will be rotated from the vertical to the horizontal position. The SST will undergo the same rotation.
The assembled copper shell will be installed into the SST horizontally, along three guide rails on the wall of the SST. The SST will then be rotated from the horizontal to the vertical position, with the copper shell temporarily fixed inside with tooling. For the above assembly and installation procedure, the available assembly space in the laboratory and possible conflicts with other subsystems have been considered in a preliminary way. Temporary lifting equipment will be set up in the laboratory. The height of the laboratory available for the detector is 3.85 m due to the steel beams on the roof; the space between the beams is higher and could be used to mount the lifting equipment. The assembly and rotation of the central detector are possible in principle, but further study is needed. A pure water system and a nitrogen system will also be set up in the laboratory; more details will be worked out.

SiPM and electronics

The assembly of the SiPMs and electronics is challenging, since the components are fragile and the clearance between SiPM tiles is tiny in order to achieve the highest possible photo-sensor coverage. Figure 9-3 shows a preliminary concept for fixing a SiPM tile (on a PCB) to the copper shell and connecting it to its FEE PCB. Both the distortion of the spherical copper shell and the accuracy of the connecting holes are critical. The different thermal expansions of the SiPMs, PCBs, bolts and copper need to be considered, given the large temperature change from 25 °C during assembly to −50 °C during operation. A preliminary thermal simulation shows that the detector assembly is safe during the cool-down process. Given the risks in this assembly work, a prototype including a fraction of the copper shell and tens of PCBs will be tested soon to practice the assembly and identify potential design and assembly problems. Then a full-size prototype with blank PCBs as proxies for the SiPMs will be tested at IHEP.

Sensors and monitoring

The detector will be monitored with sensors for temperature, liquid level, humidity, ethanol content, gas pressure and power during assembly, filling and running. The sensors will be installed along with the assembly of the main mechanical structure and integrated into the Detector Control System (DCS), which can be monitored both onsite and offsite via the network.

Liquid filling

Much experience in liquid scintillator handling can be borrowed from Daya Bay and JUNO. Acrylic vessels or barrels with an ETFE liquid bag inside will be used to ship and store the GdLS (about 2.8 t) to avoid contamination. Pumps with a flow of ∼800 L/hour, with all parts in contact with the GdLS made of fluoropolymer, similar to those used in the JUNO R&D, will be used to fill the GdLS from the storage containers into the CD through fluoropolymer pipes. Sensors will be installed in the CD to monitor the liquid levels of the GdLS and the buffer liquid. The GdLS and the buffer liquid will be filled synchronously, keeping the liquid level difference smaller than 250 mm, as suggested by the stress analysis of the acrylic vessel. The flow will be monitored with a Coriolis mass flow meter and a volume flow meter.

Calibration system

The ACU will be installed on the overflow tank, which sits on top of the SST lid, through a flange, after the CD is installed at its final location. Figure 9-4 shows the ACU on a Daya Bay detector. One of the Daya Bay ACUs will be used in JUNO-TAO after its decommissioning from Daya Bay.
The ACU is an assembly that includes the internal source drive, the outer bell jar, and a bottom plate that shields the radioactive sources stored in the ACU. When rigged from the bottom plate, the ACU can be handled as a monolithic structure of ∼100 kg. Its installation without an overhead crane, in a laboratory of limited height, remains to be worked out; a movable lifting arm is one option. A specially designed thermal insulation hat made of High Density Polyethylene (HDPE) will cover the ACU.

Veto and shielding system

The veto and shielding system includes the bottom shield of the CD made of lead, three water tanks surrounding the CD, the HDPE shielding material above the CD, and the plastic scintillator detectors on top of the HDPE shielding. The water tanks are instrumented with 3-inch PMTs to serve as water Cherenkov muon detectors. The installation sequence of the veto and shielding system is shown in Figure 9-5. On top of the water tanks there are manholes; PMTs and reflective Tyvek film will be installed inside the water tanks in parallel with the other installation work. The movable water tank part III is designed to sit on a carrier. It is the last "door" that closes the shielding of the CD; in case inspection or repair of the CD is needed, it can be opened to access the CD.

Integration & running

The whole TAO detector will be formed by integrating all subsystems. To ensure a smooth integration process, the interfaces between subsystems should be well defined and reviewed during the detector mechanical design, and a detailed detector assembly procedure has to be established and reviewed before the detector installation. To minimize the onsite workload and potential technical risks, we will carry out a detector pre-assembly at IHEP with most of the detector components and subsystems; the detector will then be disassembled, shipped onsite, and finally assembled again. This process helps to verify the installation procedure and find potential problems, to train workers and improve the onsite installation efficiency, to identify the required tools and equipment, and to provide more accurate estimates of the manpower and time required for the onsite detector installation. Based on the experience gained during the pre-assembly, the Taishan Neutrino Laboratory can be well prepared for the detector installation and operation. Since not all of the detector components can be pre-assembled, in particular the cables, the extra onsite workload should be carefully estimated, and the related installation procedures need to be well defined before the start of the onsite installation.

We will start the detector commissioning before filling the detector with liquid. At this stage, all SiPMs and electronics will be operated and tested using a light source or the SiPM dark noise, in order to find potential issues with the SiPM sensors, electronics and cabling. The DAQ/DCS subsystems will also be tested and tuned. The main goals expected to be achieved during the detector commissioning include verifying that the SiPMs and electronics work normally both at room temperature and at cryogenic temperature.

As a satellite experiment of JUNO, the safety management of JUNO-TAO will follow the management of JUNO as well as that of the Taishan Nuclear Power Plant, whichever is stricter. Some key points are summarized below.

• Design and technical review. Safety considerations need to be rooted in the design of the experiment and guaranteed through technical reviews.
All documents, including engineering drawings, will be archived.

• Hazard management. Risks and hazards should be identified and classified for the experiment, each subsystem, and the assembly and installation work. A dedicated Hazard and Operability Study (HAZOP) will be performed on the whole JUNO-TAO detector and its subsystems. Material Safety Data Sheets (MSDS) for chemicals will be archived.

• Safety officers. The safety officer of the experiment will approve the safety management rules and oversee the onsite activities. For each task carried out onsite, a designated local safety officer will oversee the whole work process.

• Onsite work control. Every onsite task requires a procedure approved by the safety officer and a Task Control Form approved by the onsite manager.

• Safety training. A training program will be developed in conjunction with the power plant. All personnel entering the laboratory will receive site training, including how to report and respond to emergencies.

• Response to emergencies. Documents and laboratory instructions will be prepared to indicate how personnel should react to different emergencies. Appropriate safety design and operational mitigation of the safety risks associated with the use of flammable liquids will be established.

• Environment protection. The project is committed to protecting the environment. The primary environmental concerns are possible spills of liquid scintillator and the use of cleaning solutions; the amount of cleaning solution is small. We do not anticipate any radiological issues, but all sealed sources will be inventoried and tracked, and radon will be monitored.

• Supervision from the power plant. The power plant will usually assign safety experts to supervise the onsite activities. We understand that the power plant will provide emergency response, including fire, police and emergency medical response. These forces are connected to the local government, but administratively report to the power plant.